Test Report: Docker_Linux_crio_arm64 21966

f7c9a93757611cb83a7bfb680dda9add42d627cb:2025-11-23:42464

Failed tests (36/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.3
35 TestAddons/parallel/Registry 15.15
36 TestAddons/parallel/RegistryCreds 0.5
37 TestAddons/parallel/Ingress 146.21
38 TestAddons/parallel/InspektorGadget 6.29
39 TestAddons/parallel/MetricsServer 5.4
41 TestAddons/parallel/CSI 56.31
42 TestAddons/parallel/Headlamp 3.41
43 TestAddons/parallel/CloudSpanner 5.33
44 TestAddons/parallel/LocalPath 8.36
45 TestAddons/parallel/NvidiaDevicePlugin 5.27
46 TestAddons/parallel/Yakd 6.27
97 TestFunctional/parallel/ServiceCmdConnect 603.47
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.95
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.91
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.29
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.35
146 TestFunctional/parallel/ServiceCmd/DeployApp 600.72
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.48
153 TestFunctional/parallel/ServiceCmd/Format 0.54
154 TestFunctional/parallel/ServiceCmd/URL 0.4
191 TestJSONOutput/pause/Command 1.58
197 TestJSONOutput/unpause/Command 2.31
282 TestPause/serial/Pause 6.51
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.42
304 TestStartStop/group/old-k8s-version/serial/Pause 6.85
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.47
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.26
322 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.26
328 TestStartStop/group/embed-certs/serial/Pause 7.4
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.94
334 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.24
344 TestStartStop/group/newest-cni/serial/Pause 8.47
349 TestStartStop/group/no-preload/serial/Pause 6.35
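
Three of the four failures detailed below (Volcano, Registry, RegistryCreds) abort on the same MK_ADDON_DISABLE_PAUSED / runc error rather than on anything addon-specific; the Ingress failure is a separate curl timeout. A quick way to gauge how widespread the shared root cause is across the raw output, sketched here with logs.txt as a placeholder for wherever the full run log was saved:

    # count hits of the shared runc error (file name is a placeholder, not part of this report)
    grep -c 'open /run/runc: no such file or directory' logs.txt
    # list the failing tests for comparison with the table above
    grep -e '--- FAIL:' logs.txt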
TestAddons/serial/Volcano (0.3s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-782760 addons disable volcano --alsologtostderr -v=1: exit status 11 (295.133437ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 07:58:49.965599 1049819 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:58:49.966458 1049819 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:58:49.966524 1049819 out.go:374] Setting ErrFile to fd 2...
	I1123 07:58:49.966546 1049819 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:58:49.966922 1049819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 07:58:49.967792 1049819 mustload.go:66] Loading cluster: addons-782760
	I1123 07:58:49.968341 1049819 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:58:49.968397 1049819 addons.go:622] checking whether the cluster is paused
	I1123 07:58:49.968575 1049819 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:58:49.968626 1049819 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:58:49.969439 1049819 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:58:49.988534 1049819 ssh_runner.go:195] Run: systemctl --version
	I1123 07:58:49.988589 1049819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:58:50.009744 1049819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:58:50.121985 1049819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:58:50.122106 1049819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:58:50.152393 1049819 cri.go:89] found id: "b7dbc42af3eaa55b87cc8920859061e757eb023e86e81249f165e03ab50e4242"
	I1123 07:58:50.152416 1049819 cri.go:89] found id: "e81b53e67dd69b5c11fd7296687e0873840c35bd3d9a0a362120bddf439d6c1b"
	I1123 07:58:50.152421 1049819 cri.go:89] found id: "25c0aa23665db233b369dab0d5441e57c0ce88fa6616d8cf7e6b835782338180"
	I1123 07:58:50.152426 1049819 cri.go:89] found id: "654a0f71268c2242c663c96bcf3824362a6b59fde36427f2178d5a6a7a40d822"
	I1123 07:58:50.152429 1049819 cri.go:89] found id: "05fe963f89f66768688e74774e00621a5f6cfcdb1fb13cf5f9f72be082d11a49"
	I1123 07:58:50.152435 1049819 cri.go:89] found id: "ff09ce175fe75259d6414ddd02e5948745625c2bbb202a6de931ef6f7a3dd631"
	I1123 07:58:50.152438 1049819 cri.go:89] found id: "7ca479867b2432892b7d17c86aa12ad6fee7b14dfa3af5e913666586727c22e5"
	I1123 07:58:50.152442 1049819 cri.go:89] found id: "35dd0f9bcb50a0d13664543c1e5ff8dac184175da2e417035c9bf88b4c70055c"
	I1123 07:58:50.152445 1049819 cri.go:89] found id: "90e12086b17a955a96fa28343672584a5d4f7e85965306622f66ff5c2f64668b"
	I1123 07:58:50.152451 1049819 cri.go:89] found id: "410c2359fb0c01d8f73a1fd70b1094ae44de6046b129327df1bd83c0d6337ebb"
	I1123 07:58:50.152455 1049819 cri.go:89] found id: "9311aa036bd97e236f7744a9e5ffd3e67d26ec0f771860cd871daaf5ef151735"
	I1123 07:58:50.152458 1049819 cri.go:89] found id: "1d4e31902581e865cf2387b39a5a9142c169c6e1eadf244cde62a11fb2d3bc71"
	I1123 07:58:50.152461 1049819 cri.go:89] found id: "9734ce796f3ef40aea74fe5b37f2070ba72c41a196839cde80dd0861b1465993"
	I1123 07:58:50.152465 1049819 cri.go:89] found id: "fb98b04224a9c4438cfa50aabef9ca321dde423db6b9e11c6ac1ef33927bce15"
	I1123 07:58:50.152469 1049819 cri.go:89] found id: "d2ffd09041ccf70f835af84256922f049edff6ce0aa5b926e7859efc43046a15"
	I1123 07:58:50.152474 1049819 cri.go:89] found id: "685798fa38932c34ea5b41c1b40649d3026a53a13752ea5bc0703dc6086e5d47"
	I1123 07:58:50.152478 1049819 cri.go:89] found id: "01a96c05c2e23fce327adec63f507ecc75154c56dc51b79294c0ada40f73d486"
	I1123 07:58:50.152482 1049819 cri.go:89] found id: "995c0ad221a0ea807ac716f43224f6603841c0abb322b78cd157d03df1535c45"
	I1123 07:58:50.152485 1049819 cri.go:89] found id: "d3d5fbc406391cea6bd05d6bf3e77708af72d668d9cf1f8f67553646b8ebd263"
	I1123 07:58:50.152488 1049819 cri.go:89] found id: "03fd92afca30f9b387a50e40f209a51d44d2219bf6337bbe9b4396831fce9ad8"
	I1123 07:58:50.152494 1049819 cri.go:89] found id: "7b54407c8a503487b0c75dba534bb8d12c3f658348cad08eeee8783e2002685a"
	I1123 07:58:50.152500 1049819 cri.go:89] found id: "4952e333e5cbca2ab975c1b717b23754934a25101ec680e6df940a3abe4aa3e3"
	I1123 07:58:50.152503 1049819 cri.go:89] found id: "1e9a39b963c81a6ff6ba191d66d478a513599130671d0996e8d442248af5eee3"
	I1123 07:58:50.152507 1049819 cri.go:89] found id: ""
	I1123 07:58:50.152559 1049819 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:58:50.167673 1049819 out.go:203] 
	W1123 07:58:50.170575 1049819 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:58:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:58:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:58:50.170602 1049819 out.go:285] * 
	* 
	W1123 07:58:50.178954 1049819 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:58:50.182058 1049819 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-782760 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.30s)
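
The exit status 11 above does not come from the volcano addon itself: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl (which succeeds) and then running `sudo runc list -f json`, which fails because /run/runc does not exist on this CRI-O node. A minimal manual reproduction, assuming the addons-782760 profile is still up and reusing only commands already shown in the stderr above:

    # the call that aborts the paused-state check; expect the same /run/runc error
    out/minikube-linux-arm64 -p addons-782760 ssh "sudo runc list -f json"
    # the CRI-O side is healthy: the preceding check from the log returns a list of container IDs
    out/minikube-linux-arm64 -p addons-782760 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"

The same pattern repeats in the Registry and RegistryCreds failures below.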

TestAddons/parallel/Registry (15.15s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 6.675592ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-rblw8" [a69c6c76-cea7-4b78-b388-24fa7110f257] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002729614s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-crmkh" [db5947b1-31f5-4ab2-93fe-b0cb4359b4eb] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003623446s
addons_test.go:392: (dbg) Run:  kubectl --context addons-782760 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-782760 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-782760 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.561781021s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 ip
2025/11/23 07:59:15 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-782760 addons disable registry --alsologtostderr -v=1: exit status 11 (317.072683ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 07:59:15.520292 1050367 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:59:15.521725 1050367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:59:15.521772 1050367 out.go:374] Setting ErrFile to fd 2...
	I1123 07:59:15.521796 1050367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:59:15.522100 1050367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 07:59:15.522468 1050367 mustload.go:66] Loading cluster: addons-782760
	I1123 07:59:15.522905 1050367 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:59:15.522947 1050367 addons.go:622] checking whether the cluster is paused
	I1123 07:59:15.523094 1050367 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:59:15.523125 1050367 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:59:15.523676 1050367 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:59:15.555666 1050367 ssh_runner.go:195] Run: systemctl --version
	I1123 07:59:15.555717 1050367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:59:15.591400 1050367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:59:15.701630 1050367 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:59:15.701714 1050367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:59:15.733968 1050367 cri.go:89] found id: "b7dbc42af3eaa55b87cc8920859061e757eb023e86e81249f165e03ab50e4242"
	I1123 07:59:15.733996 1050367 cri.go:89] found id: "e81b53e67dd69b5c11fd7296687e0873840c35bd3d9a0a362120bddf439d6c1b"
	I1123 07:59:15.734001 1050367 cri.go:89] found id: "25c0aa23665db233b369dab0d5441e57c0ce88fa6616d8cf7e6b835782338180"
	I1123 07:59:15.734005 1050367 cri.go:89] found id: "654a0f71268c2242c663c96bcf3824362a6b59fde36427f2178d5a6a7a40d822"
	I1123 07:59:15.734008 1050367 cri.go:89] found id: "05fe963f89f66768688e74774e00621a5f6cfcdb1fb13cf5f9f72be082d11a49"
	I1123 07:59:15.734012 1050367 cri.go:89] found id: "ff09ce175fe75259d6414ddd02e5948745625c2bbb202a6de931ef6f7a3dd631"
	I1123 07:59:15.734015 1050367 cri.go:89] found id: "7ca479867b2432892b7d17c86aa12ad6fee7b14dfa3af5e913666586727c22e5"
	I1123 07:59:15.734018 1050367 cri.go:89] found id: "35dd0f9bcb50a0d13664543c1e5ff8dac184175da2e417035c9bf88b4c70055c"
	I1123 07:59:15.734021 1050367 cri.go:89] found id: "90e12086b17a955a96fa28343672584a5d4f7e85965306622f66ff5c2f64668b"
	I1123 07:59:15.734027 1050367 cri.go:89] found id: "410c2359fb0c01d8f73a1fd70b1094ae44de6046b129327df1bd83c0d6337ebb"
	I1123 07:59:15.734031 1050367 cri.go:89] found id: "9311aa036bd97e236f7744a9e5ffd3e67d26ec0f771860cd871daaf5ef151735"
	I1123 07:59:15.734034 1050367 cri.go:89] found id: "1d4e31902581e865cf2387b39a5a9142c169c6e1eadf244cde62a11fb2d3bc71"
	I1123 07:59:15.734037 1050367 cri.go:89] found id: "9734ce796f3ef40aea74fe5b37f2070ba72c41a196839cde80dd0861b1465993"
	I1123 07:59:15.734040 1050367 cri.go:89] found id: "fb98b04224a9c4438cfa50aabef9ca321dde423db6b9e11c6ac1ef33927bce15"
	I1123 07:59:15.734049 1050367 cri.go:89] found id: "d2ffd09041ccf70f835af84256922f049edff6ce0aa5b926e7859efc43046a15"
	I1123 07:59:15.734054 1050367 cri.go:89] found id: "685798fa38932c34ea5b41c1b40649d3026a53a13752ea5bc0703dc6086e5d47"
	I1123 07:59:15.734058 1050367 cri.go:89] found id: "01a96c05c2e23fce327adec63f507ecc75154c56dc51b79294c0ada40f73d486"
	I1123 07:59:15.734067 1050367 cri.go:89] found id: "995c0ad221a0ea807ac716f43224f6603841c0abb322b78cd157d03df1535c45"
	I1123 07:59:15.734071 1050367 cri.go:89] found id: "d3d5fbc406391cea6bd05d6bf3e77708af72d668d9cf1f8f67553646b8ebd263"
	I1123 07:59:15.734074 1050367 cri.go:89] found id: "03fd92afca30f9b387a50e40f209a51d44d2219bf6337bbe9b4396831fce9ad8"
	I1123 07:59:15.734079 1050367 cri.go:89] found id: "7b54407c8a503487b0c75dba534bb8d12c3f658348cad08eeee8783e2002685a"
	I1123 07:59:15.734085 1050367 cri.go:89] found id: "4952e333e5cbca2ab975c1b717b23754934a25101ec680e6df940a3abe4aa3e3"
	I1123 07:59:15.734088 1050367 cri.go:89] found id: "1e9a39b963c81a6ff6ba191d66d478a513599130671d0996e8d442248af5eee3"
	I1123 07:59:15.734091 1050367 cri.go:89] found id: ""
	I1123 07:59:15.734142 1050367 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:59:15.751098 1050367 out.go:203] 
	W1123 07:59:15.753997 1050367 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:59:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:59:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:59:15.754025 1050367 out.go:285] * 
	* 
	W1123 07:59:15.762134 1050367 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:59:15.765284 1050367 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-782760 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.15s)

TestAddons/parallel/RegistryCreds (0.5s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.627766ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-782760
addons_test.go:332: (dbg) Run:  kubectl --context addons-782760 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-782760 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (262.914805ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 08:00:17.694052 1052455 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:00:17.699698 1052455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:00:17.699766 1052455 out.go:374] Setting ErrFile to fd 2...
	I1123 08:00:17.699792 1052455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:00:17.700129 1052455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:00:17.701106 1052455 mustload.go:66] Loading cluster: addons-782760
	I1123 08:00:17.703459 1052455 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:00:17.703905 1052455 addons.go:622] checking whether the cluster is paused
	I1123 08:00:17.704118 1052455 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:00:17.704157 1052455 host.go:66] Checking if "addons-782760" exists ...
	I1123 08:00:17.704747 1052455 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 08:00:17.721690 1052455 ssh_runner.go:195] Run: systemctl --version
	I1123 08:00:17.721745 1052455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 08:00:17.740421 1052455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 08:00:17.846150 1052455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:00:17.846254 1052455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:00:17.874942 1052455 cri.go:89] found id: "b7dbc42af3eaa55b87cc8920859061e757eb023e86e81249f165e03ab50e4242"
	I1123 08:00:17.874969 1052455 cri.go:89] found id: "e81b53e67dd69b5c11fd7296687e0873840c35bd3d9a0a362120bddf439d6c1b"
	I1123 08:00:17.874974 1052455 cri.go:89] found id: "25c0aa23665db233b369dab0d5441e57c0ce88fa6616d8cf7e6b835782338180"
	I1123 08:00:17.874977 1052455 cri.go:89] found id: "654a0f71268c2242c663c96bcf3824362a6b59fde36427f2178d5a6a7a40d822"
	I1123 08:00:17.874981 1052455 cri.go:89] found id: "05fe963f89f66768688e74774e00621a5f6cfcdb1fb13cf5f9f72be082d11a49"
	I1123 08:00:17.874984 1052455 cri.go:89] found id: "ff09ce175fe75259d6414ddd02e5948745625c2bbb202a6de931ef6f7a3dd631"
	I1123 08:00:17.874987 1052455 cri.go:89] found id: "7ca479867b2432892b7d17c86aa12ad6fee7b14dfa3af5e913666586727c22e5"
	I1123 08:00:17.874989 1052455 cri.go:89] found id: "35dd0f9bcb50a0d13664543c1e5ff8dac184175da2e417035c9bf88b4c70055c"
	I1123 08:00:17.874993 1052455 cri.go:89] found id: "90e12086b17a955a96fa28343672584a5d4f7e85965306622f66ff5c2f64668b"
	I1123 08:00:17.875007 1052455 cri.go:89] found id: "410c2359fb0c01d8f73a1fd70b1094ae44de6046b129327df1bd83c0d6337ebb"
	I1123 08:00:17.875011 1052455 cri.go:89] found id: "9311aa036bd97e236f7744a9e5ffd3e67d26ec0f771860cd871daaf5ef151735"
	I1123 08:00:17.875014 1052455 cri.go:89] found id: "1d4e31902581e865cf2387b39a5a9142c169c6e1eadf244cde62a11fb2d3bc71"
	I1123 08:00:17.875017 1052455 cri.go:89] found id: "9734ce796f3ef40aea74fe5b37f2070ba72c41a196839cde80dd0861b1465993"
	I1123 08:00:17.875021 1052455 cri.go:89] found id: "fb98b04224a9c4438cfa50aabef9ca321dde423db6b9e11c6ac1ef33927bce15"
	I1123 08:00:17.875029 1052455 cri.go:89] found id: "d2ffd09041ccf70f835af84256922f049edff6ce0aa5b926e7859efc43046a15"
	I1123 08:00:17.875034 1052455 cri.go:89] found id: "685798fa38932c34ea5b41c1b40649d3026a53a13752ea5bc0703dc6086e5d47"
	I1123 08:00:17.875040 1052455 cri.go:89] found id: "01a96c05c2e23fce327adec63f507ecc75154c56dc51b79294c0ada40f73d486"
	I1123 08:00:17.875043 1052455 cri.go:89] found id: "995c0ad221a0ea807ac716f43224f6603841c0abb322b78cd157d03df1535c45"
	I1123 08:00:17.875047 1052455 cri.go:89] found id: "d3d5fbc406391cea6bd05d6bf3e77708af72d668d9cf1f8f67553646b8ebd263"
	I1123 08:00:17.875049 1052455 cri.go:89] found id: "03fd92afca30f9b387a50e40f209a51d44d2219bf6337bbe9b4396831fce9ad8"
	I1123 08:00:17.875055 1052455 cri.go:89] found id: "7b54407c8a503487b0c75dba534bb8d12c3f658348cad08eeee8783e2002685a"
	I1123 08:00:17.875058 1052455 cri.go:89] found id: "4952e333e5cbca2ab975c1b717b23754934a25101ec680e6df940a3abe4aa3e3"
	I1123 08:00:17.875061 1052455 cri.go:89] found id: "1e9a39b963c81a6ff6ba191d66d478a513599130671d0996e8d442248af5eee3"
	I1123 08:00:17.875064 1052455 cri.go:89] found id: ""
	I1123 08:00:17.875126 1052455 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:00:17.890193 1052455 out.go:203] 
	W1123 08:00:17.893292 1052455 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:00:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:00:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:00:17.893330 1052455 out.go:285] * 
	* 
	W1123 08:00:17.902040 1052455 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:00:17.905263 1052455 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-782760 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.50s)

TestAddons/parallel/Ingress (146.21s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-782760 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-782760 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-782760 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [6ac94bc8-23ad-488f-b1da-c590f6a76a84] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [6ac94bc8-23ad-488f-b1da-c590f6a76a84] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003038566s
I1123 07:59:46.246980 1043159 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-782760 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.259205625s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-782760 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
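
The 2m11s curl failure above ends with exit status 28, curl's operation-timed-out code (the ssh wrapper propagates the remote exit status), which means the request never got an answer on port 80 inside the node. A hedged manual check, assuming the cluster from this run is still reachable and combining only selectors and flags already shown in the steps above plus curl's standard --max-time option:

    # confirm the ingress-nginx controller pod is actually Ready (selector taken from the wait step above)
    kubectl --context addons-782760 -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o wide
    # retry the in-node request with a short timeout so a hang fails in seconds instead of minutes
    out/minikube-linux-arm64 -p addons-782760 ssh "curl -s --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"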
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-782760
helpers_test.go:243: (dbg) docker inspect addons-782760:

-- stdout --
	[
	    {
	        "Id": "3e0fb2f2cb2c2ca7bc7b036b5b90817ca7c6955044febd5450a96db807d17185",
	        "Created": "2025-11-23T07:56:27.962209564Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1044325,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T07:56:28.049584421Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/3e0fb2f2cb2c2ca7bc7b036b5b90817ca7c6955044febd5450a96db807d17185/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3e0fb2f2cb2c2ca7bc7b036b5b90817ca7c6955044febd5450a96db807d17185/hostname",
	        "HostsPath": "/var/lib/docker/containers/3e0fb2f2cb2c2ca7bc7b036b5b90817ca7c6955044febd5450a96db807d17185/hosts",
	        "LogPath": "/var/lib/docker/containers/3e0fb2f2cb2c2ca7bc7b036b5b90817ca7c6955044febd5450a96db807d17185/3e0fb2f2cb2c2ca7bc7b036b5b90817ca7c6955044febd5450a96db807d17185-json.log",
	        "Name": "/addons-782760",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-782760:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-782760",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3e0fb2f2cb2c2ca7bc7b036b5b90817ca7c6955044febd5450a96db807d17185",
	                "LowerDir": "/var/lib/docker/overlay2/1179f3f67fd1d0ccdebabebf16620c73061bc6ae405115f7f375f734b6a4e83d-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1179f3f67fd1d0ccdebabebf16620c73061bc6ae405115f7f375f734b6a4e83d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1179f3f67fd1d0ccdebabebf16620c73061bc6ae405115f7f375f734b6a4e83d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1179f3f67fd1d0ccdebabebf16620c73061bc6ae405115f7f375f734b6a4e83d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-782760",
	                "Source": "/var/lib/docker/volumes/addons-782760/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-782760",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-782760",
	                "name.minikube.sigs.k8s.io": "addons-782760",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c9493e00d837ec3222fe395926668105071dfbc85dde8c905b0a3cbd0e3b56b8",
	            "SandboxKey": "/var/run/docker/netns/c9493e00d837",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34227"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34228"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34231"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34229"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34230"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-782760": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:34:3e:80:d9:15",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "65d6399dab490bbc161a2856eb90bdfcc5a05536af204af8a801042873393672",
	                    "EndpointID": "fb457c8d45285dd9dc8de0ea58cbf0d751c663419b15d108372decc912d3c13b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-782760",
	                        "3e0fb2f2cb2c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-782760 -n addons-782760
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-782760 logs -n 25: (1.760854215s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-178439                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-178439 │ jenkins │ v1.37.0 │ 23 Nov 25 07:56 UTC │ 23 Nov 25 07:56 UTC │
	│ start   │ --download-only -p binary-mirror-804601 --alsologtostderr --binary-mirror http://127.0.0.1:32857 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-804601   │ jenkins │ v1.37.0 │ 23 Nov 25 07:56 UTC │                     │
	│ delete  │ -p binary-mirror-804601                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-804601   │ jenkins │ v1.37.0 │ 23 Nov 25 07:56 UTC │ 23 Nov 25 07:56 UTC │
	│ addons  │ enable dashboard -p addons-782760                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-782760                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:56 UTC │                     │
	│ start   │ -p addons-782760 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:56 UTC │ 23 Nov 25 07:58 UTC │
	│ addons  │ addons-782760 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:58 UTC │                     │
	│ addons  │ addons-782760 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │                     │
	│ addons  │ addons-782760 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │                     │
	│ addons  │ addons-782760 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │                     │
	│ ip      │ addons-782760 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │ 23 Nov 25 07:59 UTC │
	│ addons  │ addons-782760 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │                     │
	│ ssh     │ addons-782760 ssh cat /opt/local-path-provisioner/pvc-4edddf59-348f-4660-91bb-3a71fe1ac723_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │ 23 Nov 25 07:59 UTC │
	│ addons  │ addons-782760 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │                     │
	│ addons  │ enable headlamp -p addons-782760 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │                     │
	│ addons  │ addons-782760 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │                     │
	│ addons  │ addons-782760 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │                     │
	│ addons  │ addons-782760 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │                     │
	│ addons  │ addons-782760 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │                     │
	│ ssh     │ addons-782760 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │                     │
	│ addons  │ addons-782760 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 08:00 UTC │                     │
	│ addons  │ addons-782760 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 08:00 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-782760                                                                                                                                                                                                                                                                                                                                                                                           │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 08:00 UTC │ 23 Nov 25 08:00 UTC │
	│ addons  │ addons-782760 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 08:00 UTC │                     │
	│ ip      │ addons-782760 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 08:01 UTC │ 23 Nov 25 08:01 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
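The table above records the addon operations issued against the addons-782760 profile during this run. As a quick cross-check (a minimal sketch, assuming the same binary and profile are still available), the resulting state of every addon can be listed with:

	out/minikube-linux-arm64 -p addons-782760 addons list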
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 07:56:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 07:56:03.114980 1043921 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:56:03.115533 1043921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:56:03.115554 1043921 out.go:374] Setting ErrFile to fd 2...
	I1123 07:56:03.115561 1043921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:56:03.115993 1043921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 07:56:03.116675 1043921 out.go:368] Setting JSON to false
	I1123 07:56:03.117679 1043921 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":31108,"bootTime":1763853455,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 07:56:03.117794 1043921 start.go:143] virtualization:  
	I1123 07:56:03.121126 1043921 out.go:179] * [addons-782760] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 07:56:03.124982 1043921 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 07:56:03.125070 1043921 notify.go:221] Checking for updates...
	I1123 07:56:03.130967 1043921 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 07:56:03.133919 1043921 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 07:56:03.136905 1043921 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 07:56:03.139934 1043921 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 07:56:03.142927 1043921 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 07:56:03.146074 1043921 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 07:56:03.176755 1043921 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 07:56:03.176874 1043921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:56:03.229560 1043921 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-23 07:56:03.220292818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 07:56:03.229700 1043921 docker.go:319] overlay module found
	I1123 07:56:03.232935 1043921 out.go:179] * Using the docker driver based on user configuration
	I1123 07:56:03.235763 1043921 start.go:309] selected driver: docker
	I1123 07:56:03.235785 1043921 start.go:927] validating driver "docker" against <nil>
	I1123 07:56:03.235799 1043921 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 07:56:03.236606 1043921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:56:03.292154 1043921 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-23 07:56:03.283293618 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 07:56:03.292306 1043921 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 07:56:03.292533 1043921 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 07:56:03.295391 1043921 out.go:179] * Using Docker driver with root privileges
	I1123 07:56:03.298217 1043921 cni.go:84] Creating CNI manager for ""
	I1123 07:56:03.298301 1043921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 07:56:03.298316 1043921 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 07:56:03.298405 1043921 start.go:353] cluster config:
	{Name:addons-782760 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-782760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1123 07:56:03.301465 1043921 out.go:179] * Starting "addons-782760" primary control-plane node in "addons-782760" cluster
	I1123 07:56:03.304249 1043921 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 07:56:03.307256 1043921 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 07:56:03.310111 1043921 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 07:56:03.310161 1043921 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 07:56:03.310174 1043921 cache.go:65] Caching tarball of preloaded images
	I1123 07:56:03.310191 1043921 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 07:56:03.310271 1043921 preload.go:238] Found /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 07:56:03.310281 1043921 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 07:56:03.310628 1043921 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/config.json ...
	I1123 07:56:03.310648 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/config.json: {Name:mkba92e87d8837cd4e3d5581be5a67ad0a2c349b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:03.326180 1043921 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 07:56:03.326317 1043921 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 07:56:03.326339 1043921 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 07:56:03.326344 1043921 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 07:56:03.326356 1043921 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 07:56:03.326361 1043921 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1123 07:56:20.838645 1043921 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1123 07:56:20.838685 1043921 cache.go:243] Successfully downloaded all kic artifacts
	I1123 07:56:20.838724 1043921 start.go:360] acquireMachinesLock for addons-782760: {Name:mkbe72898b248d290d2a77e20e593673429036d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 07:56:20.838853 1043921 start.go:364] duration metric: took 92.051µs to acquireMachinesLock for "addons-782760"
	I1123 07:56:20.838883 1043921 start.go:93] Provisioning new machine with config: &{Name:addons-782760 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-782760 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 07:56:20.838958 1043921 start.go:125] createHost starting for "" (driver="docker")
	I1123 07:56:20.842374 1043921 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1123 07:56:20.842633 1043921 start.go:159] libmachine.API.Create for "addons-782760" (driver="docker")
	I1123 07:56:20.842674 1043921 client.go:173] LocalClient.Create starting
	I1123 07:56:20.842822 1043921 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem
	I1123 07:56:20.953318 1043921 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem
	I1123 07:56:21.301270 1043921 cli_runner.go:164] Run: docker network inspect addons-782760 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 07:56:21.316967 1043921 cli_runner.go:211] docker network inspect addons-782760 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 07:56:21.317051 1043921 network_create.go:284] running [docker network inspect addons-782760] to gather additional debugging logs...
	I1123 07:56:21.317076 1043921 cli_runner.go:164] Run: docker network inspect addons-782760
	W1123 07:56:21.332834 1043921 cli_runner.go:211] docker network inspect addons-782760 returned with exit code 1
	I1123 07:56:21.332867 1043921 network_create.go:287] error running [docker network inspect addons-782760]: docker network inspect addons-782760: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-782760 not found
	I1123 07:56:21.332882 1043921 network_create.go:289] output of [docker network inspect addons-782760]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-782760 not found
	
	** /stderr **
	I1123 07:56:21.332986 1043921 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 07:56:21.348348 1043921 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b12090}
	I1123 07:56:21.348389 1043921 network_create.go:124] attempt to create docker network addons-782760 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1123 07:56:21.348448 1043921 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-782760 addons-782760
	I1123 07:56:21.417389 1043921 network_create.go:108] docker network addons-782760 192.168.49.0/24 created
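At this point the dedicated bridge network for the profile exists. A minimal way to confirm its subnet and gateway from the host (a sketch, assuming the network has not been removed since this run):

	docker network inspect addons-782760 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected, per the log above: 192.168.49.0/24 192.168.49.1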
	I1123 07:56:21.417422 1043921 kic.go:121] calculated static IP "192.168.49.2" for the "addons-782760" container
	I1123 07:56:21.417505 1043921 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 07:56:21.435433 1043921 cli_runner.go:164] Run: docker volume create addons-782760 --label name.minikube.sigs.k8s.io=addons-782760 --label created_by.minikube.sigs.k8s.io=true
	I1123 07:56:21.457235 1043921 oci.go:103] Successfully created a docker volume addons-782760
	I1123 07:56:21.457321 1043921 cli_runner.go:164] Run: docker run --rm --name addons-782760-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-782760 --entrypoint /usr/bin/test -v addons-782760:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 07:56:23.506471 1043921 cli_runner.go:217] Completed: docker run --rm --name addons-782760-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-782760 --entrypoint /usr/bin/test -v addons-782760:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (2.049101668s)
	I1123 07:56:23.506516 1043921 oci.go:107] Successfully prepared a docker volume addons-782760
	I1123 07:56:23.506558 1043921 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 07:56:23.506568 1043921 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 07:56:23.506630 1043921 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-782760:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 07:56:27.898734 1043921 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-782760:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.392064261s)
	I1123 07:56:27.898782 1043921 kic.go:203] duration metric: took 4.392210144s to extract preloaded images to volume ...
	W1123 07:56:27.898915 1043921 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 07:56:27.899023 1043921 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 07:56:27.948254 1043921 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-782760 --name addons-782760 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-782760 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-782760 --network addons-782760 --ip 192.168.49.2 --volume addons-782760:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 07:56:28.253469 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Running}}
	I1123 07:56:28.271338 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:28.293044 1043921 cli_runner.go:164] Run: docker exec addons-782760 stat /var/lib/dpkg/alternatives/iptables
	I1123 07:56:28.347618 1043921 oci.go:144] the created container "addons-782760" has a running status.
	I1123 07:56:28.347652 1043921 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa...
	I1123 07:56:29.017865 1043921 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 07:56:29.036094 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:29.051418 1043921 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 07:56:29.051452 1043921 kic_runner.go:114] Args: [docker exec --privileged addons-782760 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 07:56:29.091781 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:29.107945 1043921 machine.go:94] provisionDockerMachine start ...
	I1123 07:56:29.108039 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:29.124681 1043921 main.go:143] libmachine: Using SSH client type: native
	I1123 07:56:29.125000 1043921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34227 <nil> <nil>}
	I1123 07:56:29.125013 1043921 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 07:56:29.125690 1043921 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 07:56:32.274456 1043921 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-782760
	
	I1123 07:56:32.274481 1043921 ubuntu.go:182] provisioning hostname "addons-782760"
	I1123 07:56:32.274546 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:32.293882 1043921 main.go:143] libmachine: Using SSH client type: native
	I1123 07:56:32.294208 1043921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34227 <nil> <nil>}
	I1123 07:56:32.294224 1043921 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-782760 && echo "addons-782760" | sudo tee /etc/hostname
	I1123 07:56:32.452638 1043921 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-782760
	
	I1123 07:56:32.452723 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:32.469284 1043921 main.go:143] libmachine: Using SSH client type: native
	I1123 07:56:32.469632 1043921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34227 <nil> <nil>}
	I1123 07:56:32.469656 1043921 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-782760' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-782760/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-782760' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 07:56:32.619222 1043921 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 07:56:32.619245 1043921 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 07:56:32.619276 1043921 ubuntu.go:190] setting up certificates
	I1123 07:56:32.619285 1043921 provision.go:84] configureAuth start
	I1123 07:56:32.619344 1043921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-782760
	I1123 07:56:32.636466 1043921 provision.go:143] copyHostCerts
	I1123 07:56:32.636555 1043921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 07:56:32.636691 1043921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 07:56:32.636765 1043921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
	I1123 07:56:32.636830 1043921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.addons-782760 san=[127.0.0.1 192.168.49.2 addons-782760 localhost minikube]
	I1123 07:56:32.710139 1043921 provision.go:177] copyRemoteCerts
	I1123 07:56:32.710204 1043921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 07:56:32.710242 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:32.725949 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:32.830768 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 07:56:32.847702 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 07:56:32.864986 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 07:56:32.881891 1043921 provision.go:87] duration metric: took 262.579104ms to configureAuth
	I1123 07:56:32.881919 1043921 ubuntu.go:206] setting minikube options for container-runtime
	I1123 07:56:32.882144 1043921 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:56:32.882265 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:32.900974 1043921 main.go:143] libmachine: Using SSH client type: native
	I1123 07:56:32.901300 1043921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34227 <nil> <nil>}
	I1123 07:56:32.901319 1043921 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 07:56:33.196247 1043921 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 07:56:33.196285 1043921 machine.go:97] duration metric: took 4.088302801s to provisionDockerMachine
	I1123 07:56:33.196295 1043921 client.go:176] duration metric: took 12.353611625s to LocalClient.Create
	I1123 07:56:33.196318 1043921 start.go:167] duration metric: took 12.353678684s to libmachine.API.Create "addons-782760"
	I1123 07:56:33.196328 1043921 start.go:293] postStartSetup for "addons-782760" (driver="docker")
	I1123 07:56:33.196338 1043921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 07:56:33.196410 1043921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 07:56:33.196468 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:33.213220 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:33.318935 1043921 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 07:56:33.322075 1043921 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 07:56:33.322106 1043921 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 07:56:33.322118 1043921 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/addons for local assets ...
	I1123 07:56:33.322180 1043921 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/files for local assets ...
	I1123 07:56:33.322206 1043921 start.go:296] duration metric: took 125.872398ms for postStartSetup
	I1123 07:56:33.322514 1043921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-782760
	I1123 07:56:33.338303 1043921 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/config.json ...
	I1123 07:56:33.338592 1043921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 07:56:33.338642 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:33.355626 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:33.456033 1043921 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 07:56:33.460611 1043921 start.go:128] duration metric: took 12.621639337s to createHost
	I1123 07:56:33.460678 1043921 start.go:83] releasing machines lock for "addons-782760", held for 12.621810114s
	I1123 07:56:33.460770 1043921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-782760
	I1123 07:56:33.476965 1043921 ssh_runner.go:195] Run: cat /version.json
	I1123 07:56:33.477007 1043921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 07:56:33.477014 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:33.477058 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:33.496736 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:33.498504 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:33.686990 1043921 ssh_runner.go:195] Run: systemctl --version
	I1123 07:56:33.692938 1043921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 07:56:33.726669 1043921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 07:56:33.730740 1043921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 07:56:33.730859 1043921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 07:56:33.757871 1043921 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 07:56:33.757895 1043921 start.go:496] detecting cgroup driver to use...
	I1123 07:56:33.757926 1043921 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 07:56:33.757991 1043921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 07:56:33.773620 1043921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 07:56:33.785867 1043921 docker.go:218] disabling cri-docker service (if available) ...
	I1123 07:56:33.785938 1043921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 07:56:33.804563 1043921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 07:56:33.824032 1043921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 07:56:33.951213 1043921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 07:56:34.084749 1043921 docker.go:234] disabling docker service ...
	I1123 07:56:34.084867 1043921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 07:56:34.106694 1043921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 07:56:34.120039 1043921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 07:56:34.239852 1043921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 07:56:34.349893 1043921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 07:56:34.363689 1043921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 07:56:34.378257 1043921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 07:56:34.378360 1043921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:56:34.393557 1043921 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 07:56:34.393643 1043921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:56:34.402830 1043921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:56:34.411906 1043921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:56:34.420533 1043921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 07:56:34.428687 1043921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:56:34.437290 1043921 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:56:34.450409 1043921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:56:34.459977 1043921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 07:56:34.468697 1043921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 07:56:34.476335 1043921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 07:56:34.581247 1043921 ssh_runner.go:195] Run: sudo systemctl restart crio
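The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf before this restart. A minimal sketch for confirming the values they set (run against the node; the expected lines follow directly from the commands logged above):

	out/minikube-linux-arm64 -p addons-782760 ssh -- sudo cat /etc/crio/crio.conf.d/02-crio.conf
	# expect, among other settings:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls)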
	I1123 07:56:34.741518 1043921 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 07:56:34.741647 1043921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 07:56:34.745336 1043921 start.go:564] Will wait 60s for crictl version
	I1123 07:56:34.745443 1043921 ssh_runner.go:195] Run: which crictl
	I1123 07:56:34.749001 1043921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 07:56:34.776328 1043921 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 07:56:34.776484 1043921 ssh_runner.go:195] Run: crio --version
	I1123 07:56:34.804070 1043921 ssh_runner.go:195] Run: crio --version
	I1123 07:56:34.833325 1043921 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 07:56:34.836114 1043921 cli_runner.go:164] Run: docker network inspect addons-782760 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 07:56:34.851962 1043921 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 07:56:34.855507 1043921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 07:56:34.864631 1043921 kubeadm.go:884] updating cluster {Name:addons-782760 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-782760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 07:56:34.864753 1043921 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 07:56:34.864813 1043921 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 07:56:34.904086 1043921 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 07:56:34.904110 1043921 crio.go:433] Images already preloaded, skipping extraction
	I1123 07:56:34.904168 1043921 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 07:56:34.929073 1043921 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 07:56:34.929108 1043921 cache_images.go:86] Images are preloaded, skipping loading
	I1123 07:56:34.929116 1043921 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1123 07:56:34.929217 1043921 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-782760 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-782760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 07:56:34.929305 1043921 ssh_runner.go:195] Run: crio config
	I1123 07:56:34.980976 1043921 cni.go:84] Creating CNI manager for ""
	I1123 07:56:34.981000 1043921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 07:56:34.981015 1043921 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 07:56:34.981068 1043921 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-782760 NodeName:addons-782760 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 07:56:34.981209 1043921 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-782760"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
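The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are the kubeadm config that minikube copies to the node a few lines below. A minimal sketch for inspecting the materialized files, using the destination paths from the scp lines that follow (the staged kubeadm.yaml.new may be renamed once kubeadm has consumed it):

	out/minikube-linux-arm64 -p addons-782760 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	out/minikube-linux-arm64 -p addons-782760 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new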
	
	I1123 07:56:34.981286 1043921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 07:56:34.988859 1043921 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 07:56:34.988936 1043921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 07:56:34.996156 1043921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 07:56:35.012888 1043921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 07:56:35.026835 1043921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1123 07:56:35.039308 1043921 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1123 07:56:35.042791 1043921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 07:56:35.051959 1043921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 07:56:35.159384 1043921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 07:56:35.174884 1043921 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760 for IP: 192.168.49.2
	I1123 07:56:35.174906 1043921 certs.go:195] generating shared ca certs ...
	I1123 07:56:35.174923 1043921 certs.go:227] acquiring lock for ca certs: {Name:mk8b2dd1177c57b74f955f055073d275001ee616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:35.175132 1043921 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key
	I1123 07:56:35.444177 1043921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt ...
	I1123 07:56:35.444210 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt: {Name:mk8146c5b7a605f779e320eb84a5cb2ea564082b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:35.444448 1043921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key ...
	I1123 07:56:35.444465 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key: {Name:mk26f3ffa20a6bcc50ae913917776508521cc9b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:35.444589 1043921 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key
	I1123 07:56:35.756453 1043921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt ...
	I1123 07:56:35.756482 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt: {Name:mk1f883bd52c353a0d324bd09106e5a1dc14c56c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:35.756659 1043921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key ...
	I1123 07:56:35.756672 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key: {Name:mke4c88318427a2ef42dd51a08bdffba43aefe94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:35.756753 1043921 certs.go:257] generating profile certs ...
	I1123 07:56:35.756822 1043921 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.key
	I1123 07:56:35.756838 1043921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt with IP's: []
	I1123 07:56:36.045925 1043921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt ...
	I1123 07:56:36.045969 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: {Name:mk7dc761132cd3836da2c08a7038d07c60f4df22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:36.046163 1043921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.key ...
	I1123 07:56:36.046176 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.key: {Name:mka496c926ea8fd6d350fdb7fa6c05066bc5e55d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:36.046262 1043921 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.key.5e94d694
	I1123 07:56:36.046283 1043921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.crt.5e94d694 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1123 07:56:36.205078 1043921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.crt.5e94d694 ...
	I1123 07:56:36.205109 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.crt.5e94d694: {Name:mk8a13a122175f0ddb1281d41cffd2c533aaf4b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:36.205294 1043921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.key.5e94d694 ...
	I1123 07:56:36.205311 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.key.5e94d694: {Name:mk8bba5e6439cf832dddfbbf160c0063b04ad5f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:36.205410 1043921 certs.go:382] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.crt.5e94d694 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.crt
	I1123 07:56:36.205523 1043921 certs.go:386] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.key.5e94d694 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.key
	I1123 07:56:36.205580 1043921 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/proxy-client.key
	I1123 07:56:36.205600 1043921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/proxy-client.crt with IP's: []
	I1123 07:56:36.275741 1043921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/proxy-client.crt ...
	I1123 07:56:36.275770 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/proxy-client.crt: {Name:mka7718c2545da001702f275c6eea0267d39520a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:36.275939 1043921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/proxy-client.key ...
	I1123 07:56:36.275951 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/proxy-client.key: {Name:mkdccc58ebfd80aafabea53e5a76b2198b113569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:36.276137 1043921 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 07:56:36.276179 1043921 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem (1078 bytes)
	I1123 07:56:36.276208 1043921 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem (1123 bytes)
	I1123 07:56:36.276243 1043921 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem (1675 bytes)
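
The certs.go lines above create the shared CA pair and then sign per-profile certificates, including an apiserver serving certificate whose SANs are the service VIP, loopback and node IPs listed at 07:56:36.046283. The sketch below shows the same kind of issuance with Go's crypto/x509; it is illustrative only (keys are throwaway, errors are dropped for brevity) and is not the code path minikube actually runs.

    // A sketch of issuing an apiserver-style serving certificate signed by a CA, with
    // the IP SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2).
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// CA key pair (the real flow reuses ~/.minikube/ca.crt and ca.key).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Serving certificate for the apiserver, signed by the CA above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
    		},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
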
	I1123 07:56:36.276779 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 07:56:36.295448 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 07:56:36.313203 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 07:56:36.330320 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 07:56:36.347226 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 07:56:36.364044 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 07:56:36.380872 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 07:56:36.398081 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 07:56:36.415480 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 07:56:36.432775 1043921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 07:56:36.445691 1043921 ssh_runner.go:195] Run: openssl version
	I1123 07:56:36.451981 1043921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 07:56:36.460479 1043921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 07:56:36.464420 1043921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 07:56:36.464540 1043921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 07:56:36.505058 1043921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
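
The `openssl x509 -hash` call plus the b5213941.0 symlink above is how the minikube CA is added to the node's OpenSSL trust directory: the link name is the certificate's subject hash with a .0 suffix. A small sketch of that step, assuming the openssl binary is available and using the paths from the log:

    // Compute the certificate's OpenSSL subject hash and symlink
    // /etc/ssl/certs/<hash>.0 at the PEM so OpenSSL-based clients trust minikubeCA.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func trustCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as seen in the log
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	_ = os.Remove(link) // replace any stale link, like `ln -fs`
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := trustCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
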
	I1123 07:56:36.513709 1043921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 07:56:36.517296 1043921 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 07:56:36.517349 1043921 kubeadm.go:401] StartCluster: {Name:addons-782760 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-782760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 07:56:36.517435 1043921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:56:36.517495 1043921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:56:36.543266 1043921 cri.go:89] found id: ""
	I1123 07:56:36.543337 1043921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 07:56:36.550854 1043921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 07:56:36.558218 1043921 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 07:56:36.558325 1043921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 07:56:36.565691 1043921 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 07:56:36.565710 1043921 kubeadm.go:158] found existing configuration files:
	
	I1123 07:56:36.565759 1043921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 07:56:36.573169 1043921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 07:56:36.573262 1043921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 07:56:36.580328 1043921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 07:56:36.588003 1043921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 07:56:36.588173 1043921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 07:56:36.595677 1043921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 07:56:36.602945 1043921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 07:56:36.603058 1043921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 07:56:36.610552 1043921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 07:56:36.617998 1043921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 07:56:36.618122 1043921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 07:56:36.626644 1043921 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 07:56:36.679608 1043921 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 07:56:36.679669 1043921 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 07:56:36.703818 1043921 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 07:56:36.703898 1043921 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 07:56:36.703938 1043921 kubeadm.go:319] OS: Linux
	I1123 07:56:36.703988 1043921 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 07:56:36.704041 1043921 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 07:56:36.704098 1043921 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 07:56:36.704151 1043921 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 07:56:36.704203 1043921 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 07:56:36.704262 1043921 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 07:56:36.704312 1043921 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 07:56:36.704364 1043921 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 07:56:36.704415 1043921 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 07:56:36.770028 1043921 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 07:56:36.770235 1043921 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 07:56:36.770375 1043921 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 07:56:36.777168 1043921 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 07:56:36.783796 1043921 out.go:252]   - Generating certificates and keys ...
	I1123 07:56:36.783902 1043921 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 07:56:36.783974 1043921 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 07:56:38.594555 1043921 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 07:56:39.216478 1043921 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 07:56:39.555558 1043921 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 07:56:39.941644 1043921 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 07:56:40.768461 1043921 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 07:56:40.768639 1043921 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-782760 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 07:56:41.113559 1043921 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 07:56:41.113914 1043921 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-782760 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 07:56:41.500982 1043921 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 07:56:41.616936 1043921 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 07:56:42.411413 1043921 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 07:56:42.411714 1043921 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 07:56:42.558575 1043921 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 07:56:42.996050 1043921 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 07:56:43.282564 1043921 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 07:56:44.031592 1043921 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 07:56:44.883966 1043921 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 07:56:44.884921 1043921 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 07:56:44.888862 1043921 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 07:56:44.892254 1043921 out.go:252]   - Booting up control plane ...
	I1123 07:56:44.892365 1043921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 07:56:44.892443 1043921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 07:56:44.893349 1043921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 07:56:44.912506 1043921 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 07:56:44.912880 1043921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 07:56:44.920000 1043921 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 07:56:44.920316 1043921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 07:56:44.920570 1043921 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 07:56:45.059289 1043921 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 07:56:45.059435 1043921 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 07:56:46.560444 1043921 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501700148s
	I1123 07:56:46.571397 1043921 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 07:56:46.571494 1043921 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1123 07:56:46.571583 1043921 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 07:56:46.571668 1043921 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 07:56:49.755635 1043921 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.183997939s
	I1123 07:56:51.224076 1043921 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.652670518s
	I1123 07:56:52.573664 1043921 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002063363s
	I1123 07:56:52.594117 1043921 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 07:56:52.607806 1043921 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 07:56:52.620157 1043921 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 07:56:52.620364 1043921 kubeadm.go:319] [mark-control-plane] Marking the node addons-782760 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 07:56:52.632440 1043921 kubeadm.go:319] [bootstrap-token] Using token: 1t27ze.71y3zo3jsbxnoaq7
	I1123 07:56:52.637420 1043921 out.go:252]   - Configuring RBAC rules ...
	I1123 07:56:52.637550 1043921 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 07:56:52.641780 1043921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 07:56:52.649281 1043921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 07:56:52.652947 1043921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 07:56:52.657121 1043921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 07:56:52.663308 1043921 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 07:56:52.981051 1043921 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 07:56:53.413290 1043921 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 07:56:53.980493 1043921 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 07:56:53.981488 1043921 kubeadm.go:319] 
	I1123 07:56:53.981557 1043921 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 07:56:53.981562 1043921 kubeadm.go:319] 
	I1123 07:56:53.981639 1043921 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 07:56:53.981643 1043921 kubeadm.go:319] 
	I1123 07:56:53.981667 1043921 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 07:56:53.981726 1043921 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 07:56:53.981777 1043921 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 07:56:53.981780 1043921 kubeadm.go:319] 
	I1123 07:56:53.981834 1043921 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 07:56:53.981838 1043921 kubeadm.go:319] 
	I1123 07:56:53.981886 1043921 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 07:56:53.981891 1043921 kubeadm.go:319] 
	I1123 07:56:53.981943 1043921 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 07:56:53.982018 1043921 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 07:56:53.982086 1043921 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 07:56:53.982090 1043921 kubeadm.go:319] 
	I1123 07:56:53.982184 1043921 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 07:56:53.982273 1043921 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 07:56:53.982278 1043921 kubeadm.go:319] 
	I1123 07:56:53.982361 1043921 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1t27ze.71y3zo3jsbxnoaq7 \
	I1123 07:56:53.982464 1043921 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb \
	I1123 07:56:53.982484 1043921 kubeadm.go:319] 	--control-plane 
	I1123 07:56:53.982488 1043921 kubeadm.go:319] 
	I1123 07:56:53.982572 1043921 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 07:56:53.982576 1043921 kubeadm.go:319] 
	I1123 07:56:53.982658 1043921 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1t27ze.71y3zo3jsbxnoaq7 \
	I1123 07:56:53.983019 1043921 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb 
	I1123 07:56:53.986842 1043921 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 07:56:53.987075 1043921 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 07:56:53.987207 1043921 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
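
Everything from the [init] line at 07:56:36.679608 down to these warnings is kubeadm's own output, relayed line by line as the remote command runs. Below is a rough local sketch of that relay pattern; it is not minikube's ssh_runner, the --ignore-preflight-errors list is abbreviated, and running it for real needs root on a node prepared as described above.

    // Run kubeadm init and log its combined stdout/stderr line by line as it arrives.
    package main

    import (
    	"bufio"
    	"io"
    	"log"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "/bin/bash", "-c",
    		`env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=SystemVerification`)

    	pr, pw := io.Pipe()
    	cmd.Stdout = pw
    	cmd.Stderr = pw

    	if err := cmd.Start(); err != nil {
    		log.Fatal(err)
    	}
    	go func() {
    		// Close the write end (carrying any exit error) once kubeadm finishes,
    		// so the scanner below terminates.
    		pw.CloseWithError(cmd.Wait())
    	}()

    	sc := bufio.NewScanner(pr)
    	for sc.Scan() {
    		log.Printf("kubeadm: %s", sc.Text()) // mirrors the kubeadm.go:319 lines above
    	}
    }
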
	I1123 07:56:53.987220 1043921 cni.go:84] Creating CNI manager for ""
	I1123 07:56:53.987227 1043921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 07:56:53.992309 1043921 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 07:56:53.995328 1043921 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 07:56:53.999064 1043921 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 07:56:53.999082 1043921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 07:56:54.015309 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 07:56:54.284246 1043921 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 07:56:54.284378 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:54.284472 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-782760 minikube.k8s.io/updated_at=2025_11_23T07_56_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=addons-782760 minikube.k8s.io/primary=true
	I1123 07:56:54.300757 1043921 ops.go:34] apiserver oom_adj: -16
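
The oom_adj check above confirms the apiserver came up with a strongly negative OOM adjustment (-16 on the legacy oom_adj scale of -17 to 15), so the kernel will avoid killing it under memory pressure. A sketch of that check, assuming pgrep is installed and a kube-apiserver process is running:

    // Locate the kube-apiserver process the way pgrep does and read its legacy
    // OOM adjustment from /proc.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		log.Fatalf("kube-apiserver not running: %v", err)
    	}
    	pid := strings.Fields(string(out))[0] // first matching pid
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj) // the run above reports -16
    }
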
	I1123 07:56:54.440806 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:54.940988 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:55.440989 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:55.940943 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:56.441143 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:56.940876 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:57.441705 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:57.941031 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:58.441013 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:58.941144 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:59.042917 1043921 kubeadm.go:1114] duration metric: took 4.758581917s to wait for elevateKubeSystemPrivileges
	I1123 07:56:59.042947 1043921 kubeadm.go:403] duration metric: took 22.525601601s to StartCluster
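
The repeated `kubectl get sa default` calls between 07:56:54.44 and 07:56:59.04 are a poll loop: cluster start is only declared done once the default service account exists, which is what the elevateKubeSystemPrivileges metric above measures. A generic sketch of that wait follows; the kubectl and kubeconfig paths are the ones in the log, while the 2-minute timeout is an assumption.

    // Retry `kubectl get sa default` until it succeeds or a deadline passes.
    package main

    import (
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for {
    		err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
    			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
    		if err == nil {
    			log.Println("default service account is ready")
    			return
    		}
    		if time.Now().After(deadline) {
    			log.Fatalf("timed out waiting for default service account: %v", err)
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing of the retries above
    	}
    }
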
	I1123 07:56:59.042964 1043921 settings.go:142] acquiring lock: {Name:mk23f3092f33e47ced9558cb4bac2b30c55547fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:59.043085 1043921 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 07:56:59.043492 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:59.043686 1043921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 07:56:59.043706 1043921 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 07:56:59.043988 1043921 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:56:59.044039 1043921 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1123 07:56:59.044138 1043921 addons.go:70] Setting yakd=true in profile "addons-782760"
	I1123 07:56:59.044152 1043921 addons.go:239] Setting addon yakd=true in "addons-782760"
	I1123 07:56:59.044173 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.044711 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.045092 1043921 addons.go:70] Setting metrics-server=true in profile "addons-782760"
	I1123 07:56:59.045114 1043921 addons.go:239] Setting addon metrics-server=true in "addons-782760"
	I1123 07:56:59.045137 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.045557 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.045685 1043921 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-782760"
	I1123 07:56:59.045697 1043921 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-782760"
	I1123 07:56:59.045715 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.046117 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.048274 1043921 addons.go:70] Setting registry=true in profile "addons-782760"
	I1123 07:56:59.048306 1043921 addons.go:239] Setting addon registry=true in "addons-782760"
	I1123 07:56:59.048457 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.048468 1043921 addons.go:70] Setting cloud-spanner=true in profile "addons-782760"
	I1123 07:56:59.048492 1043921 addons.go:239] Setting addon cloud-spanner=true in "addons-782760"
	I1123 07:56:59.048525 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.049025 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.048453 1043921 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-782760"
	I1123 07:56:59.049232 1043921 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-782760"
	I1123 07:56:59.049257 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.049726 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.052089 1043921 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-782760"
	I1123 07:56:59.052158 1043921 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-782760"
	I1123 07:56:59.052192 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.052693 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.053653 1043921 addons.go:70] Setting registry-creds=true in profile "addons-782760"
	I1123 07:56:59.053675 1043921 addons.go:239] Setting addon registry-creds=true in "addons-782760"
	I1123 07:56:59.053706 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.054279 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.056588 1043921 addons.go:70] Setting default-storageclass=true in profile "addons-782760"
	I1123 07:56:59.056627 1043921 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-782760"
	I1123 07:56:59.056969 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.063908 1043921 addons.go:70] Setting storage-provisioner=true in profile "addons-782760"
	I1123 07:56:59.063948 1043921 addons.go:239] Setting addon storage-provisioner=true in "addons-782760"
	I1123 07:56:59.063982 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.064458 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.066722 1043921 addons.go:70] Setting gcp-auth=true in profile "addons-782760"
	I1123 07:56:59.066759 1043921 mustload.go:66] Loading cluster: addons-782760
	I1123 07:56:59.066992 1043921 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:56:59.067356 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.079370 1043921 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-782760"
	I1123 07:56:59.079418 1043921 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-782760"
	I1123 07:56:59.079901 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.103581 1043921 addons.go:70] Setting ingress=true in profile "addons-782760"
	I1123 07:56:59.103626 1043921 addons.go:239] Setting addon ingress=true in "addons-782760"
	I1123 07:56:59.103675 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.104254 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.112593 1043921 addons.go:70] Setting volcano=true in profile "addons-782760"
	I1123 07:56:59.112638 1043921 addons.go:239] Setting addon volcano=true in "addons-782760"
	I1123 07:56:59.112675 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.114304 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.128263 1043921 addons.go:70] Setting ingress-dns=true in profile "addons-782760"
	I1123 07:56:59.128307 1043921 addons.go:239] Setting addon ingress-dns=true in "addons-782760"
	I1123 07:56:59.128381 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.128940 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.145400 1043921 addons.go:70] Setting volumesnapshots=true in profile "addons-782760"
	I1123 07:56:59.145437 1043921 addons.go:239] Setting addon volumesnapshots=true in "addons-782760"
	I1123 07:56:59.145475 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.145937 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.155564 1043921 addons.go:70] Setting inspektor-gadget=true in profile "addons-782760"
	I1123 07:56:59.155672 1043921 addons.go:239] Setting addon inspektor-gadget=true in "addons-782760"
	I1123 07:56:59.155749 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.156498 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.179228 1043921 out.go:179] * Verifying Kubernetes components...
	I1123 07:56:59.182815 1043921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 07:56:59.184035 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.282310 1043921 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1123 07:56:59.317509 1043921 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1123 07:56:59.326693 1043921 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1123 07:56:59.326717 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1123 07:56:59.326795 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.369193 1043921 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1123 07:56:59.369368 1043921 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1123 07:56:59.369516 1043921 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1123 07:56:59.369703 1043921 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1123 07:56:59.381259 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.382782 1043921 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 07:56:59.382845 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1123 07:56:59.382923 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.386213 1043921 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1123 07:56:59.386232 1043921 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1123 07:56:59.386300 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.371527 1043921 addons.go:239] Setting addon default-storageclass=true in "addons-782760"
	I1123 07:56:59.394417 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.394974 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.403672 1043921 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-782760"
	I1123 07:56:59.403767 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.404288 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.416140 1043921 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1123 07:56:59.416307 1043921 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1123 07:56:59.416472 1043921 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	W1123 07:56:59.392491 1043921 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1123 07:56:59.392074 1043921 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 07:56:59.428029 1043921 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 07:56:59.428102 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.429590 1043921 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1123 07:56:59.429598 1043921 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 07:56:59.429749 1043921 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 07:56:59.444513 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1123 07:56:59.444586 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.454966 1043921 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1123 07:56:59.456941 1043921 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1123 07:56:59.457227 1043921 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 07:56:59.457272 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 07:56:59.457358 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.441545 1043921 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 07:56:59.470004 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1123 07:56:59.470079 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.482696 1043921 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1123 07:56:59.483631 1043921 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1123 07:56:59.483649 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1123 07:56:59.483749 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.490351 1043921 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1123 07:56:59.494021 1043921 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1123 07:56:59.497188 1043921 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1123 07:56:59.497336 1043921 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 07:56:59.498545 1043921 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 07:56:59.498563 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1123 07:56:59.498627 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.520724 1043921 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1123 07:56:59.522793 1043921 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1123 07:56:59.529560 1043921 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1123 07:56:59.531381 1043921 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1123 07:56:59.531417 1043921 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1123 07:56:59.531490 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.547350 1043921 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 07:56:59.557306 1043921 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1123 07:56:59.558454 1043921 out.go:179]   - Using image docker.io/registry:3.0.0
	I1123 07:56:59.559341 1043921 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 07:56:59.559359 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1123 07:56:59.559510 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.564794 1043921 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1123 07:56:59.564813 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1123 07:56:59.564949 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.548262 1043921 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1123 07:56:59.584788 1043921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1123 07:56:59.587274 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.602981 1043921 out.go:179]   - Using image docker.io/busybox:stable
	I1123 07:56:59.609456 1043921 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 07:56:59.609486 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1123 07:56:59.609546 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.631057 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.634106 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.634805 1043921 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 07:56:59.634818 1043921 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 07:56:59.634948 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.704609 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.721324 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.742816 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.767819 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.770671 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.794699 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.797100 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.804811 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.822613 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	W1123 07:56:59.827754 1043921 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 07:56:59.827871 1043921 retry.go:31] will retry after 218.284426ms: ssh: handshake failed: EOF
	I1123 07:56:59.837970 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.847888 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	W1123 07:56:59.855465 1043921 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 07:56:59.855564 1043921 retry.go:31] will retry after 171.88141ms: ssh: handshake failed: EOF
	I1123 07:56:59.857529 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.859058 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.862179 1043921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 07:56:59.862427 1043921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 07:57:00.450521 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1123 07:57:00.547480 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 07:57:00.588696 1043921 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1123 07:57:00.588767 1043921 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1123 07:57:00.620644 1043921 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 07:57:00.620722 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1123 07:57:00.640541 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 07:57:00.668490 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 07:57:00.675233 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 07:57:00.675604 1043921 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1123 07:57:00.675653 1043921 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1123 07:57:00.733635 1043921 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1123 07:57:00.733714 1043921 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1123 07:57:00.738056 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 07:57:00.756322 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1123 07:57:00.765253 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 07:57:00.767441 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 07:57:00.768541 1043921 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1123 07:57:00.768595 1043921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1123 07:57:00.776349 1043921 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1123 07:57:00.776420 1043921 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1123 07:57:00.843963 1043921 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 07:57:00.844039 1043921 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 07:57:00.847004 1043921 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1123 07:57:00.847071 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1123 07:57:00.912099 1043921 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1123 07:57:00.912181 1043921 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1123 07:57:00.928887 1043921 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1123 07:57:00.928966 1043921 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1123 07:57:00.940786 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 07:57:00.978722 1043921 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1123 07:57:00.978811 1043921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1123 07:57:01.048550 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1123 07:57:01.052140 1043921 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 07:57:01.052213 1043921 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 07:57:01.067780 1043921 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1123 07:57:01.067849 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1123 07:57:01.097329 1043921 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1123 07:57:01.097362 1043921 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1123 07:57:01.121860 1043921 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1123 07:57:01.121886 1043921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1123 07:57:01.178716 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 07:57:01.215248 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1123 07:57:01.248929 1043921 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1123 07:57:01.249009 1043921 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1123 07:57:01.289283 1043921 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1123 07:57:01.289365 1043921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1123 07:57:01.498155 1043921 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 07:57:01.498228 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1123 07:57:01.537740 1043921 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1123 07:57:01.537813 1043921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1123 07:57:01.642949 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 07:57:01.786201 1043921 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.923993159s)
	I1123 07:57:01.786160 1043921 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.923682768s)
	I1123 07:57:01.786380 1043921 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
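For context on the CoreDNS rewrite logged just above: the piped sed command inserts a hosts block that resolves host.minikube.internal to the gateway IP (192.168.49.1) immediately before the forward directive, then replaces the coredns ConfigMap. A minimal Go sketch of that same string transformation (illustrative only; the actual mechanism is the shell pipeline shown in the log, not Go code):

// Sketch: insert a hosts block before the "forward . /etc/resolv.conf" line
// of a Corefile, mirroring the sed command in the log above. Illustrative.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// Place the hosts block immediately before the forward directive.
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}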
	I1123 07:57:01.787694 1043921 node_ready.go:35] waiting up to 6m0s for node "addons-782760" to be "Ready" ...
	I1123 07:57:01.801880 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.351251344s)
	I1123 07:57:01.817884 1043921 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1123 07:57:01.817908 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1123 07:57:02.046481 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.498891038s)
	I1123 07:57:02.225103 1043921 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1123 07:57:02.225125 1043921 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1123 07:57:02.294076 1043921 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-782760" context rescaled to 1 replicas
	I1123 07:57:02.436624 1043921 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1123 07:57:02.436648 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1123 07:57:02.633303 1043921 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1123 07:57:02.633325 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1123 07:57:02.742067 1043921 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 07:57:02.742090 1043921 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1123 07:57:03.037795 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1123 07:57:03.800096 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:04.991087 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.350457894s)
	I1123 07:57:04.991118 1043921 addons.go:495] Verifying addon ingress=true in "addons-782760"
	I1123 07:57:04.991325 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.32274187s)
	I1123 07:57:04.991405 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.316099976s)
	I1123 07:57:04.991432 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.253311428s)
	I1123 07:57:04.991509 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.235119054s)
	I1123 07:57:04.991617 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.224110819s)
	I1123 07:57:04.991645 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.05079972s)
	I1123 07:57:04.991843 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.943229699s)
	I1123 07:57:04.991862 1043921 addons.go:495] Verifying addon registry=true in "addons-782760"
	I1123 07:57:04.991954 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.22623811s)
	I1123 07:57:04.992114 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.813376057s)
	I1123 07:57:04.992128 1043921 addons.go:495] Verifying addon metrics-server=true in "addons-782760"
	I1123 07:57:04.992265 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.776941824s)
	I1123 07:57:04.995289 1043921 out.go:179] * Verifying ingress addon...
	I1123 07:57:04.997307 1043921 out.go:179] * Verifying registry addon...
	I1123 07:57:04.997397 1043921 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-782760 service yakd-dashboard -n yakd-dashboard
	
	I1123 07:57:04.999970 1043921 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1123 07:57:05.001838 1043921 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
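The long run of kapi.go:96 "waiting for pod ... Pending" lines that follows is produced by polling pods that match a label selector until every match reports phase Running. A hedged client-go sketch of that pattern (an illustration, not minikube's actual kapi code; the kubeconfig path is taken from the log above):

// Sketch: poll pods matching a label selector until all report Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				allRunning = false
			}
		}
		if allRunning {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the log above
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForPodsRunning(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		fmt.Println("wait failed:", err)
	}
}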
	I1123 07:57:05.053636 1043921 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 07:57:05.053701 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:05.054005 1043921 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1123 07:57:05.054045 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:57:05.069493 1043921 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
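The warning above is a standard optimistic-concurrency conflict: the StorageClass changed between read and update, so the API server rejects the stale write until it is retried against the latest resourceVersion. A hedged client-go sketch of that retry pattern (illustrative; not necessarily how the storage-provisioner-rancher addon handles it):

// Sketch: re-read the StorageClass and re-apply the default-class annotation
// using client-go's RetryOnConflict helper, so each attempt carries a fresh
// resourceVersion. Illustrative only.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func markDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the log above
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := markDefault(context.Background(), cs, "local-path"); err != nil {
		fmt.Println("marking default failed:", err)
	}
}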
	I1123 07:57:05.171825 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.528793554s)
	W1123 07:57:05.171859 1043921 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1123 07:57:05.171879 1043921 retry.go:31] will retry after 127.667831ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
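The retry.go line above schedules a re-run of the failed apply after roughly 128ms, which gives the snapshot CRDs created in the first pass time to be established before the VolumeSnapshotClass object is mapped. A small self-contained Go sketch of that retry-with-backoff pattern (hypothetical helper names, not minikube's actual retry package):

// Sketch: run fn up to a fixed number of attempts, sleeping a growing delay
// between failures, and return the last error if every attempt fails.
package main

import (
	"errors"
	"fmt"
	"time"
)

func retryWithBackoff(attempts int, initialDelay time.Duration, fn func() error) error {
	delay := initialDelay
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := retryWithBackoff(3, 100*time.Millisecond, func() error {
		calls++
		if calls < 2 {
			return errors.New("CRDs not established yet")
		}
		return nil
	})
	fmt.Println("calls:", calls, "err:", err)
}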
	I1123 07:57:05.299694 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 07:57:05.510580 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:05.511011 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:05.755978 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.718136521s)
	I1123 07:57:05.756051 1043921 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-782760"
	I1123 07:57:05.759277 1043921 out.go:179] * Verifying csi-hostpath-driver addon...
	I1123 07:57:05.762874 1043921 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1123 07:57:05.768093 1043921 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 07:57:05.768153 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:06.008555 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:06.011387 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:06.266012 1043921 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 07:57:06.266036 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:06.291136 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:06.504254 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:06.505412 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:06.766878 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:06.992164 1043921 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1123 07:57:06.992267 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:57:07.008870 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:07.008934 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:07.013494 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:57:07.132846 1043921 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1123 07:57:07.145411 1043921 addons.go:239] Setting addon gcp-auth=true in "addons-782760"
	I1123 07:57:07.145459 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:57:07.145933 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:57:07.162851 1043921 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1123 07:57:07.162903 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:57:07.180029 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:57:07.266393 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:07.503360 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:07.505269 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:07.766092 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:08.005617 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:08.008574 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:08.144061 1043921 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 07:57:08.144175 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.844392662s)
	I1123 07:57:08.149881 1043921 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1123 07:57:08.152773 1043921 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1123 07:57:08.152799 1043921 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1123 07:57:08.166304 1043921 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1123 07:57:08.166330 1043921 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1123 07:57:08.182113 1043921 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 07:57:08.182136 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1123 07:57:08.195002 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 07:57:08.266904 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:08.292081 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:08.505583 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:08.506738 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:08.684026 1043921 addons.go:495] Verifying addon gcp-auth=true in "addons-782760"
	I1123 07:57:08.686688 1043921 out.go:179] * Verifying gcp-auth addon...
	I1123 07:57:08.689327 1043921 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1123 07:57:08.698242 1043921 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1123 07:57:08.698268 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:08.796379 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:09.004443 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:09.006276 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:09.192287 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:09.266021 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:09.503528 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:09.505578 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:09.692083 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:09.765781 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:10.007258 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:10.011109 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:10.192695 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:10.266730 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:10.292645 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:10.504082 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:10.504817 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:10.692919 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:10.766708 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:11.005085 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:11.007933 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:11.192961 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:11.265809 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:11.504434 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:11.504633 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:11.693091 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:11.765849 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:12.005227 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:12.008285 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:12.193121 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:12.266173 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:12.503870 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:12.505338 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:12.692096 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:12.765928 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:12.790521 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:13.003599 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:13.006200 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:13.192999 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:13.266532 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:13.503834 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:13.504883 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:13.693057 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:13.765956 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:14.004916 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:14.006871 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:14.192612 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:14.266453 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:14.503956 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:14.504148 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:14.692901 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:14.765837 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:14.790688 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:15.006954 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:15.008291 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:15.192123 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:15.266056 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:15.503204 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:15.505560 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:15.692418 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:15.766065 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:16.008026 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:16.008507 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:16.192216 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:16.265904 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:16.504497 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:16.505209 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:16.692978 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:16.767120 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:16.791046 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:17.003103 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:17.006047 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:17.192955 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:17.265851 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:17.504108 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:17.504331 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:17.692431 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:17.769818 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:18.010590 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:18.011495 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:18.192466 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:18.266402 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:18.505266 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:18.505713 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:18.692488 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:18.766349 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:18.791095 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:19.003291 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:19.005879 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:19.192703 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:19.266545 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:19.504066 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:19.505238 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:19.693227 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:19.765893 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:20.011222 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:20.012050 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:20.193295 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:20.266162 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:20.503951 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:20.505298 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:20.692129 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:20.765779 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:21.005619 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:21.007142 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:21.192140 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:21.265675 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:21.291129 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:21.504762 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:21.506053 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:21.693208 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:21.765998 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:22.006192 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:22.007159 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:22.192898 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:22.265726 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:22.504245 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:22.504987 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:22.692841 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:22.765991 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:23.003364 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:23.006229 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:23.192239 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:23.267088 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:23.291670 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:23.504401 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:23.504601 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:23.692649 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:23.766311 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:24.003080 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:24.007694 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:24.192485 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:24.266218 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:24.503943 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:24.505780 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:24.700212 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:24.765775 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:25.013224 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:25.015280 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:25.193060 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:25.265692 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:25.503901 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:25.504695 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:25.692605 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:25.766200 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:25.791415 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:26.004878 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:26.008420 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:26.192044 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:26.265950 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:26.505520 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:26.505839 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:26.692981 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:26.766981 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:27.005427 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:27.007530 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:27.192724 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:27.266688 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:27.502970 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:27.504768 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:27.692525 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:27.770940 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:28.004937 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:28.007006 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:28.192470 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:28.267912 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:28.290755 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:28.504767 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:28.505196 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:28.693032 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:28.766128 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:29.004052 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:29.006348 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:29.192689 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:29.266881 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:29.503997 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:29.504592 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:29.692519 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:29.766479 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:30.003251 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:30.010371 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:30.193080 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:30.265954 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:30.291420 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:30.503551 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:30.504448 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:30.692499 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:30.765993 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:31.005780 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:31.007975 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:31.193145 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:31.265624 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:31.503760 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:31.504230 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:31.691902 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:31.766572 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:32.003553 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:32.009303 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:32.192108 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:32.266836 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:32.503352 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:32.505077 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:32.693126 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:32.765830 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:32.790474 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:33.005642 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:33.005806 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:33.193081 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:33.266298 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:33.505639 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:33.506155 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:33.693110 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:33.765644 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:34.008194 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:34.008309 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:34.192796 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:34.266439 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:34.504170 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:34.504616 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:34.693368 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:34.766034 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:34.790847 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:35.002897 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:35.005434 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:35.192349 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:35.265957 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:35.504190 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:35.505087 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:35.692856 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:35.766894 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:36.007801 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:36.013446 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:36.192168 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:36.265863 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:36.503405 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:36.504591 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:36.692449 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:36.765954 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:37.006607 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:37.008111 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:37.192255 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:37.266199 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:37.291971 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:37.502958 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:37.505208 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:37.693152 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:37.765687 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:38.010386 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:38.012567 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:38.192834 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:38.267005 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:38.503225 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:38.505085 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:38.697455 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:38.766165 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:39.003042 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:39.006289 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:39.192428 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:39.266282 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:39.503288 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:39.505046 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:39.692797 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:39.794000 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:39.817785 1043921 node_ready.go:49] node "addons-782760" is "Ready"
	I1123 07:57:39.817864 1043921 node_ready.go:38] duration metric: took 38.030132168s for node "addons-782760" to be "Ready" ...
	I1123 07:57:39.817891 1043921 api_server.go:52] waiting for apiserver process to appear ...
	I1123 07:57:39.817977 1043921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 07:57:39.846505 1043921 api_server.go:72] duration metric: took 40.802768186s to wait for apiserver process to appear ...
	I1123 07:57:39.846530 1043921 api_server.go:88] waiting for apiserver healthz status ...
	I1123 07:57:39.846548 1043921 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 07:57:39.862981 1043921 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1123 07:57:39.877943 1043921 api_server.go:141] control plane version: v1.34.1
	I1123 07:57:39.877971 1043921 api_server.go:131] duration metric: took 31.435147ms to wait for apiserver health ...
	I1123 07:57:39.877980 1043921 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 07:57:39.890453 1043921 system_pods.go:59] 19 kube-system pods found
	I1123 07:57:39.890487 1043921 system_pods.go:61] "coredns-66bc5c9577-d9vmc" [554db792-666e-408c-8ae1-52bf3fe32b9a] Pending
	I1123 07:57:39.890493 1043921 system_pods.go:61] "csi-hostpath-attacher-0" [f979198a-fe36-4dc2-8a71-4c41af723eae] Pending
	I1123 07:57:39.890497 1043921 system_pods.go:61] "csi-hostpath-resizer-0" [95093cda-ec14-4f1e-ba9b-d696e2511286] Pending
	I1123 07:57:39.890501 1043921 system_pods.go:61] "csi-hostpathplugin-8j7r2" [2de4707d-c64f-4ebf-9dd2-69abf0bd6418] Pending
	I1123 07:57:39.890505 1043921 system_pods.go:61] "etcd-addons-782760" [dd4b8ebd-25d5-4754-a325-714c8496c618] Running
	I1123 07:57:39.890508 1043921 system_pods.go:61] "kindnet-qrqlv" [754150a4-5e3c-477e-96ac-67e2e8438826] Running
	I1123 07:57:39.890512 1043921 system_pods.go:61] "kube-apiserver-addons-782760" [826caeeb-44b7-449f-a5c2-4a32568deb97] Running
	I1123 07:57:39.890515 1043921 system_pods.go:61] "kube-controller-manager-addons-782760" [6e1ae611-5937-435a-aefa-2f94b36d08e0] Running
	I1123 07:57:39.890519 1043921 system_pods.go:61] "kube-ingress-dns-minikube" [9e0f12dd-7a60-47c9-89d9-feade94785dd] Pending
	I1123 07:57:39.890523 1043921 system_pods.go:61] "kube-proxy-jv2pd" [6c3bfa28-8f74-4b7d-9c44-ecdf225e77dd] Running
	I1123 07:57:39.890526 1043921 system_pods.go:61] "kube-scheduler-addons-782760" [b0d963fa-dc46-4b9c-880e-8d94d6872c1f] Running
	I1123 07:57:39.890531 1043921 system_pods.go:61] "metrics-server-85b7d694d7-l4cfr" [784e1f40-e163-423b-b2c4-7f3e9306070b] Pending
	I1123 07:57:39.890539 1043921 system_pods.go:61] "nvidia-device-plugin-daemonset-stqrq" [68a915e8-7aa3-479a-a75c-9cb582f7b791] Pending
	I1123 07:57:39.890548 1043921 system_pods.go:61] "registry-6b586f9694-rblw8" [a69c6c76-cea7-4b78-b388-24fa7110f257] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:57:39.890558 1043921 system_pods.go:61] "registry-creds-764b6fb674-5m8ft" [6908fc1b-d56b-4159-bae1-3a2c7f324b9e] Pending
	I1123 07:57:39.890564 1043921 system_pods.go:61] "registry-proxy-crmkh" [db5947b1-31f5-4ab2-93fe-b0cb4359b4eb] Pending
	I1123 07:57:39.890568 1043921 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4rwkm" [76531992-e9a2-42a3-8325-63265f73ce98] Pending
	I1123 07:57:39.890571 1043921 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wqcnm" [7516bc6b-a724-4ecf-96e4-82ed81ef59f8] Pending
	I1123 07:57:39.890575 1043921 system_pods.go:61] "storage-provisioner" [15857a38-d245-473f-83fd-6096457f6f64] Pending
	I1123 07:57:39.890586 1043921 system_pods.go:74] duration metric: took 12.600255ms to wait for pod list to return data ...
	I1123 07:57:39.890593 1043921 default_sa.go:34] waiting for default service account to be created ...
	I1123 07:57:39.904168 1043921 default_sa.go:45] found service account: "default"
	I1123 07:57:39.904195 1043921 default_sa.go:55] duration metric: took 13.596412ms for default service account to be created ...
	I1123 07:57:39.904205 1043921 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 07:57:39.911954 1043921 system_pods.go:86] 19 kube-system pods found
	I1123 07:57:39.911989 1043921 system_pods.go:89] "coredns-66bc5c9577-d9vmc" [554db792-666e-408c-8ae1-52bf3fe32b9a] Pending
	I1123 07:57:39.911996 1043921 system_pods.go:89] "csi-hostpath-attacher-0" [f979198a-fe36-4dc2-8a71-4c41af723eae] Pending
	I1123 07:57:39.912000 1043921 system_pods.go:89] "csi-hostpath-resizer-0" [95093cda-ec14-4f1e-ba9b-d696e2511286] Pending
	I1123 07:57:39.912003 1043921 system_pods.go:89] "csi-hostpathplugin-8j7r2" [2de4707d-c64f-4ebf-9dd2-69abf0bd6418] Pending
	I1123 07:57:39.912007 1043921 system_pods.go:89] "etcd-addons-782760" [dd4b8ebd-25d5-4754-a325-714c8496c618] Running
	I1123 07:57:39.912012 1043921 system_pods.go:89] "kindnet-qrqlv" [754150a4-5e3c-477e-96ac-67e2e8438826] Running
	I1123 07:57:39.912017 1043921 system_pods.go:89] "kube-apiserver-addons-782760" [826caeeb-44b7-449f-a5c2-4a32568deb97] Running
	I1123 07:57:39.912021 1043921 system_pods.go:89] "kube-controller-manager-addons-782760" [6e1ae611-5937-435a-aefa-2f94b36d08e0] Running
	I1123 07:57:39.912025 1043921 system_pods.go:89] "kube-ingress-dns-minikube" [9e0f12dd-7a60-47c9-89d9-feade94785dd] Pending
	I1123 07:57:39.912029 1043921 system_pods.go:89] "kube-proxy-jv2pd" [6c3bfa28-8f74-4b7d-9c44-ecdf225e77dd] Running
	I1123 07:57:39.912033 1043921 system_pods.go:89] "kube-scheduler-addons-782760" [b0d963fa-dc46-4b9c-880e-8d94d6872c1f] Running
	I1123 07:57:39.912041 1043921 system_pods.go:89] "metrics-server-85b7d694d7-l4cfr" [784e1f40-e163-423b-b2c4-7f3e9306070b] Pending
	I1123 07:57:39.912046 1043921 system_pods.go:89] "nvidia-device-plugin-daemonset-stqrq" [68a915e8-7aa3-479a-a75c-9cb582f7b791] Pending
	I1123 07:57:39.912055 1043921 system_pods.go:89] "registry-6b586f9694-rblw8" [a69c6c76-cea7-4b78-b388-24fa7110f257] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:57:39.912059 1043921 system_pods.go:89] "registry-creds-764b6fb674-5m8ft" [6908fc1b-d56b-4159-bae1-3a2c7f324b9e] Pending
	I1123 07:57:39.912072 1043921 system_pods.go:89] "registry-proxy-crmkh" [db5947b1-31f5-4ab2-93fe-b0cb4359b4eb] Pending
	I1123 07:57:39.912077 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4rwkm" [76531992-e9a2-42a3-8325-63265f73ce98] Pending
	I1123 07:57:39.912081 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wqcnm" [7516bc6b-a724-4ecf-96e4-82ed81ef59f8] Pending
	I1123 07:57:39.912091 1043921 system_pods.go:89] "storage-provisioner" [15857a38-d245-473f-83fd-6096457f6f64] Pending
	I1123 07:57:39.912106 1043921 retry.go:31] will retry after 274.40814ms: missing components: kube-dns
	I1123 07:57:40.013809 1043921 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 07:57:40.013904 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:40.014948 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:40.235451 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:40.241095 1043921 system_pods.go:86] 19 kube-system pods found
	I1123 07:57:40.241182 1043921 system_pods.go:89] "coredns-66bc5c9577-d9vmc" [554db792-666e-408c-8ae1-52bf3fe32b9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 07:57:40.241204 1043921 system_pods.go:89] "csi-hostpath-attacher-0" [f979198a-fe36-4dc2-8a71-4c41af723eae] Pending
	I1123 07:57:40.241225 1043921 system_pods.go:89] "csi-hostpath-resizer-0" [95093cda-ec14-4f1e-ba9b-d696e2511286] Pending
	I1123 07:57:40.241257 1043921 system_pods.go:89] "csi-hostpathplugin-8j7r2" [2de4707d-c64f-4ebf-9dd2-69abf0bd6418] Pending
	I1123 07:57:40.241279 1043921 system_pods.go:89] "etcd-addons-782760" [dd4b8ebd-25d5-4754-a325-714c8496c618] Running
	I1123 07:57:40.241299 1043921 system_pods.go:89] "kindnet-qrqlv" [754150a4-5e3c-477e-96ac-67e2e8438826] Running
	I1123 07:57:40.241318 1043921 system_pods.go:89] "kube-apiserver-addons-782760" [826caeeb-44b7-449f-a5c2-4a32568deb97] Running
	I1123 07:57:40.241355 1043921 system_pods.go:89] "kube-controller-manager-addons-782760" [6e1ae611-5937-435a-aefa-2f94b36d08e0] Running
	I1123 07:57:40.241374 1043921 system_pods.go:89] "kube-ingress-dns-minikube" [9e0f12dd-7a60-47c9-89d9-feade94785dd] Pending
	I1123 07:57:40.241392 1043921 system_pods.go:89] "kube-proxy-jv2pd" [6c3bfa28-8f74-4b7d-9c44-ecdf225e77dd] Running
	I1123 07:57:40.241426 1043921 system_pods.go:89] "kube-scheduler-addons-782760" [b0d963fa-dc46-4b9c-880e-8d94d6872c1f] Running
	I1123 07:57:40.241452 1043921 system_pods.go:89] "metrics-server-85b7d694d7-l4cfr" [784e1f40-e163-423b-b2c4-7f3e9306070b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 07:57:40.241472 1043921 system_pods.go:89] "nvidia-device-plugin-daemonset-stqrq" [68a915e8-7aa3-479a-a75c-9cb582f7b791] Pending
	I1123 07:57:40.241507 1043921 system_pods.go:89] "registry-6b586f9694-rblw8" [a69c6c76-cea7-4b78-b388-24fa7110f257] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:57:40.241530 1043921 system_pods.go:89] "registry-creds-764b6fb674-5m8ft" [6908fc1b-d56b-4159-bae1-3a2c7f324b9e] Pending
	I1123 07:57:40.241551 1043921 system_pods.go:89] "registry-proxy-crmkh" [db5947b1-31f5-4ab2-93fe-b0cb4359b4eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 07:57:40.241584 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4rwkm" [76531992-e9a2-42a3-8325-63265f73ce98] Pending
	I1123 07:57:40.241611 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wqcnm" [7516bc6b-a724-4ecf-96e4-82ed81ef59f8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:57:40.241631 1043921 system_pods.go:89] "storage-provisioner" [15857a38-d245-473f-83fd-6096457f6f64] Pending
	I1123 07:57:40.241678 1043921 retry.go:31] will retry after 358.244102ms: missing components: kube-dns
	I1123 07:57:40.274512 1043921 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 07:57:40.274583 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:40.508900 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:40.509005 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:40.611043 1043921 system_pods.go:86] 19 kube-system pods found
	I1123 07:57:40.611128 1043921 system_pods.go:89] "coredns-66bc5c9577-d9vmc" [554db792-666e-408c-8ae1-52bf3fe32b9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 07:57:40.611153 1043921 system_pods.go:89] "csi-hostpath-attacher-0" [f979198a-fe36-4dc2-8a71-4c41af723eae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 07:57:40.611203 1043921 system_pods.go:89] "csi-hostpath-resizer-0" [95093cda-ec14-4f1e-ba9b-d696e2511286] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 07:57:40.611231 1043921 system_pods.go:89] "csi-hostpathplugin-8j7r2" [2de4707d-c64f-4ebf-9dd2-69abf0bd6418] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 07:57:40.611251 1043921 system_pods.go:89] "etcd-addons-782760" [dd4b8ebd-25d5-4754-a325-714c8496c618] Running
	I1123 07:57:40.611285 1043921 system_pods.go:89] "kindnet-qrqlv" [754150a4-5e3c-477e-96ac-67e2e8438826] Running
	I1123 07:57:40.611305 1043921 system_pods.go:89] "kube-apiserver-addons-782760" [826caeeb-44b7-449f-a5c2-4a32568deb97] Running
	I1123 07:57:40.611323 1043921 system_pods.go:89] "kube-controller-manager-addons-782760" [6e1ae611-5937-435a-aefa-2f94b36d08e0] Running
	I1123 07:57:40.611346 1043921 system_pods.go:89] "kube-ingress-dns-minikube" [9e0f12dd-7a60-47c9-89d9-feade94785dd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 07:57:40.611380 1043921 system_pods.go:89] "kube-proxy-jv2pd" [6c3bfa28-8f74-4b7d-9c44-ecdf225e77dd] Running
	I1123 07:57:40.611398 1043921 system_pods.go:89] "kube-scheduler-addons-782760" [b0d963fa-dc46-4b9c-880e-8d94d6872c1f] Running
	I1123 07:57:40.611419 1043921 system_pods.go:89] "metrics-server-85b7d694d7-l4cfr" [784e1f40-e163-423b-b2c4-7f3e9306070b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 07:57:40.611454 1043921 system_pods.go:89] "nvidia-device-plugin-daemonset-stqrq" [68a915e8-7aa3-479a-a75c-9cb582f7b791] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 07:57:40.611480 1043921 system_pods.go:89] "registry-6b586f9694-rblw8" [a69c6c76-cea7-4b78-b388-24fa7110f257] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:57:40.611502 1043921 system_pods.go:89] "registry-creds-764b6fb674-5m8ft" [6908fc1b-d56b-4159-bae1-3a2c7f324b9e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 07:57:40.611538 1043921 system_pods.go:89] "registry-proxy-crmkh" [db5947b1-31f5-4ab2-93fe-b0cb4359b4eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 07:57:40.611561 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4rwkm" [76531992-e9a2-42a3-8325-63265f73ce98] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:57:40.611585 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wqcnm" [7516bc6b-a724-4ecf-96e4-82ed81ef59f8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:57:40.611620 1043921 system_pods.go:89] "storage-provisioner" [15857a38-d245-473f-83fd-6096457f6f64] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 07:57:40.611741 1043921 retry.go:31] will retry after 397.988495ms: missing components: kube-dns
	I1123 07:57:40.710088 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:40.811710 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:41.009441 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:41.009848 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:41.024021 1043921 system_pods.go:86] 19 kube-system pods found
	I1123 07:57:41.024102 1043921 system_pods.go:89] "coredns-66bc5c9577-d9vmc" [554db792-666e-408c-8ae1-52bf3fe32b9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 07:57:41.024126 1043921 system_pods.go:89] "csi-hostpath-attacher-0" [f979198a-fe36-4dc2-8a71-4c41af723eae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 07:57:41.024164 1043921 system_pods.go:89] "csi-hostpath-resizer-0" [95093cda-ec14-4f1e-ba9b-d696e2511286] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 07:57:41.024190 1043921 system_pods.go:89] "csi-hostpathplugin-8j7r2" [2de4707d-c64f-4ebf-9dd2-69abf0bd6418] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 07:57:41.024210 1043921 system_pods.go:89] "etcd-addons-782760" [dd4b8ebd-25d5-4754-a325-714c8496c618] Running
	I1123 07:57:41.024244 1043921 system_pods.go:89] "kindnet-qrqlv" [754150a4-5e3c-477e-96ac-67e2e8438826] Running
	I1123 07:57:41.024267 1043921 system_pods.go:89] "kube-apiserver-addons-782760" [826caeeb-44b7-449f-a5c2-4a32568deb97] Running
	I1123 07:57:41.024285 1043921 system_pods.go:89] "kube-controller-manager-addons-782760" [6e1ae611-5937-435a-aefa-2f94b36d08e0] Running
	I1123 07:57:41.024322 1043921 system_pods.go:89] "kube-ingress-dns-minikube" [9e0f12dd-7a60-47c9-89d9-feade94785dd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 07:57:41.024343 1043921 system_pods.go:89] "kube-proxy-jv2pd" [6c3bfa28-8f74-4b7d-9c44-ecdf225e77dd] Running
	I1123 07:57:41.024362 1043921 system_pods.go:89] "kube-scheduler-addons-782760" [b0d963fa-dc46-4b9c-880e-8d94d6872c1f] Running
	I1123 07:57:41.024397 1043921 system_pods.go:89] "metrics-server-85b7d694d7-l4cfr" [784e1f40-e163-423b-b2c4-7f3e9306070b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 07:57:41.024422 1043921 system_pods.go:89] "nvidia-device-plugin-daemonset-stqrq" [68a915e8-7aa3-479a-a75c-9cb582f7b791] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 07:57:41.024444 1043921 system_pods.go:89] "registry-6b586f9694-rblw8" [a69c6c76-cea7-4b78-b388-24fa7110f257] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:57:41.024478 1043921 system_pods.go:89] "registry-creds-764b6fb674-5m8ft" [6908fc1b-d56b-4159-bae1-3a2c7f324b9e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 07:57:41.024503 1043921 system_pods.go:89] "registry-proxy-crmkh" [db5947b1-31f5-4ab2-93fe-b0cb4359b4eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 07:57:41.024524 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4rwkm" [76531992-e9a2-42a3-8325-63265f73ce98] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:57:41.024561 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wqcnm" [7516bc6b-a724-4ecf-96e4-82ed81ef59f8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:57:41.024586 1043921 system_pods.go:89] "storage-provisioner" [15857a38-d245-473f-83fd-6096457f6f64] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 07:57:41.024618 1043921 retry.go:31] will retry after 480.908132ms: missing components: kube-dns
	I1123 07:57:41.192539 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:41.266524 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:41.503954 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:41.506405 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:41.510359 1043921 system_pods.go:86] 19 kube-system pods found
	I1123 07:57:41.510445 1043921 system_pods.go:89] "coredns-66bc5c9577-d9vmc" [554db792-666e-408c-8ae1-52bf3fe32b9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 07:57:41.510471 1043921 system_pods.go:89] "csi-hostpath-attacher-0" [f979198a-fe36-4dc2-8a71-4c41af723eae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 07:57:41.510509 1043921 system_pods.go:89] "csi-hostpath-resizer-0" [95093cda-ec14-4f1e-ba9b-d696e2511286] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 07:57:41.510537 1043921 system_pods.go:89] "csi-hostpathplugin-8j7r2" [2de4707d-c64f-4ebf-9dd2-69abf0bd6418] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 07:57:41.510561 1043921 system_pods.go:89] "etcd-addons-782760" [dd4b8ebd-25d5-4754-a325-714c8496c618] Running
	I1123 07:57:41.510600 1043921 system_pods.go:89] "kindnet-qrqlv" [754150a4-5e3c-477e-96ac-67e2e8438826] Running
	I1123 07:57:41.510625 1043921 system_pods.go:89] "kube-apiserver-addons-782760" [826caeeb-44b7-449f-a5c2-4a32568deb97] Running
	I1123 07:57:41.510644 1043921 system_pods.go:89] "kube-controller-manager-addons-782760" [6e1ae611-5937-435a-aefa-2f94b36d08e0] Running
	I1123 07:57:41.510683 1043921 system_pods.go:89] "kube-ingress-dns-minikube" [9e0f12dd-7a60-47c9-89d9-feade94785dd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 07:57:41.510707 1043921 system_pods.go:89] "kube-proxy-jv2pd" [6c3bfa28-8f74-4b7d-9c44-ecdf225e77dd] Running
	I1123 07:57:41.510729 1043921 system_pods.go:89] "kube-scheduler-addons-782760" [b0d963fa-dc46-4b9c-880e-8d94d6872c1f] Running
	I1123 07:57:41.510772 1043921 system_pods.go:89] "metrics-server-85b7d694d7-l4cfr" [784e1f40-e163-423b-b2c4-7f3e9306070b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 07:57:41.510798 1043921 system_pods.go:89] "nvidia-device-plugin-daemonset-stqrq" [68a915e8-7aa3-479a-a75c-9cb582f7b791] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 07:57:41.510820 1043921 system_pods.go:89] "registry-6b586f9694-rblw8" [a69c6c76-cea7-4b78-b388-24fa7110f257] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:57:41.510853 1043921 system_pods.go:89] "registry-creds-764b6fb674-5m8ft" [6908fc1b-d56b-4159-bae1-3a2c7f324b9e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 07:57:41.510877 1043921 system_pods.go:89] "registry-proxy-crmkh" [db5947b1-31f5-4ab2-93fe-b0cb4359b4eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 07:57:41.510898 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4rwkm" [76531992-e9a2-42a3-8325-63265f73ce98] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:57:41.510933 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wqcnm" [7516bc6b-a724-4ecf-96e4-82ed81ef59f8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:57:41.510972 1043921 system_pods.go:89] "storage-provisioner" [15857a38-d245-473f-83fd-6096457f6f64] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 07:57:41.511018 1043921 retry.go:31] will retry after 725.611233ms: missing components: kube-dns
	I1123 07:57:41.693152 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:41.794587 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:42.005316 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:42.008482 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:42.194280 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:42.296605 1043921 system_pods.go:86] 19 kube-system pods found
	I1123 07:57:42.296699 1043921 system_pods.go:89] "coredns-66bc5c9577-d9vmc" [554db792-666e-408c-8ae1-52bf3fe32b9a] Running
	I1123 07:57:42.296727 1043921 system_pods.go:89] "csi-hostpath-attacher-0" [f979198a-fe36-4dc2-8a71-4c41af723eae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 07:57:42.296772 1043921 system_pods.go:89] "csi-hostpath-resizer-0" [95093cda-ec14-4f1e-ba9b-d696e2511286] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 07:57:42.296801 1043921 system_pods.go:89] "csi-hostpathplugin-8j7r2" [2de4707d-c64f-4ebf-9dd2-69abf0bd6418] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 07:57:42.296820 1043921 system_pods.go:89] "etcd-addons-782760" [dd4b8ebd-25d5-4754-a325-714c8496c618] Running
	I1123 07:57:42.296855 1043921 system_pods.go:89] "kindnet-qrqlv" [754150a4-5e3c-477e-96ac-67e2e8438826] Running
	I1123 07:57:42.296880 1043921 system_pods.go:89] "kube-apiserver-addons-782760" [826caeeb-44b7-449f-a5c2-4a32568deb97] Running
	I1123 07:57:42.296901 1043921 system_pods.go:89] "kube-controller-manager-addons-782760" [6e1ae611-5937-435a-aefa-2f94b36d08e0] Running
	I1123 07:57:42.296939 1043921 system_pods.go:89] "kube-ingress-dns-minikube" [9e0f12dd-7a60-47c9-89d9-feade94785dd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 07:57:42.296961 1043921 system_pods.go:89] "kube-proxy-jv2pd" [6c3bfa28-8f74-4b7d-9c44-ecdf225e77dd] Running
	I1123 07:57:42.296981 1043921 system_pods.go:89] "kube-scheduler-addons-782760" [b0d963fa-dc46-4b9c-880e-8d94d6872c1f] Running
	I1123 07:57:42.297022 1043921 system_pods.go:89] "metrics-server-85b7d694d7-l4cfr" [784e1f40-e163-423b-b2c4-7f3e9306070b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 07:57:42.297054 1043921 system_pods.go:89] "nvidia-device-plugin-daemonset-stqrq" [68a915e8-7aa3-479a-a75c-9cb582f7b791] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 07:57:42.297001 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:42.297103 1043921 system_pods.go:89] "registry-6b586f9694-rblw8" [a69c6c76-cea7-4b78-b388-24fa7110f257] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:57:42.297138 1043921 system_pods.go:89] "registry-creds-764b6fb674-5m8ft" [6908fc1b-d56b-4159-bae1-3a2c7f324b9e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 07:57:42.297153 1043921 system_pods.go:89] "registry-proxy-crmkh" [db5947b1-31f5-4ab2-93fe-b0cb4359b4eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 07:57:42.297164 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4rwkm" [76531992-e9a2-42a3-8325-63265f73ce98] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:57:42.297175 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wqcnm" [7516bc6b-a724-4ecf-96e4-82ed81ef59f8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:57:42.297183 1043921 system_pods.go:89] "storage-provisioner" [15857a38-d245-473f-83fd-6096457f6f64] Running
	I1123 07:57:42.297193 1043921 system_pods.go:126] duration metric: took 2.392982357s to wait for k8s-apps to be running ...
	I1123 07:57:42.297223 1043921 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 07:57:42.297304 1043921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 07:57:42.313635 1043921 system_svc.go:56] duration metric: took 16.40013ms WaitForService to wait for kubelet
	I1123 07:57:42.313737 1043921 kubeadm.go:587] duration metric: took 43.270002452s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 07:57:42.313770 1043921 node_conditions.go:102] verifying NodePressure condition ...
	I1123 07:57:42.317004 1043921 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 07:57:42.317107 1043921 node_conditions.go:123] node cpu capacity is 2
	I1123 07:57:42.317139 1043921 node_conditions.go:105] duration metric: took 3.336832ms to run NodePressure ...
	I1123 07:57:42.317178 1043921 start.go:242] waiting for startup goroutines ...
	I1123 07:57:42.506487 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:42.506699 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:42.693266 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:42.766335 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:43.004337 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:43.007076 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:43.192171 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:43.266212 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:43.504063 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:43.505887 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:43.693684 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:43.794480 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:44.004638 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:44.005796 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:44.193494 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:44.267596 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:44.505688 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:44.505838 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:44.692653 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:44.767129 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:45.006820 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:45.009809 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:45.207981 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:45.270300 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:45.505325 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:45.505598 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:45.692758 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:45.766903 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:46.007564 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:46.008095 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:46.192839 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:46.266861 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:46.503711 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:46.504730 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:46.692678 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:46.767545 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:47.007408 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:47.007927 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:47.192967 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:47.265960 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:47.503274 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:47.505445 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:47.692946 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:47.793607 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:48.003821 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:48.006403 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:48.192416 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:48.266217 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:48.503793 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:48.505170 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:48.693582 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:48.766883 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:49.006458 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:49.008477 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:49.192702 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:49.266931 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:49.504084 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:49.505852 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:49.693847 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:49.794270 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:50.004719 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:50.016117 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:50.193402 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:50.266564 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:50.503674 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:50.505883 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:50.692756 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:50.766934 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:51.006749 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:51.006923 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:51.192979 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:51.270915 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:51.511633 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:51.512292 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:51.693613 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:51.795747 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:52.008771 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:52.009257 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:52.192948 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:52.266836 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:52.508743 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:52.509322 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:52.693954 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:52.769788 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:53.007734 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:53.007985 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:53.194600 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:53.295429 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:53.507502 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:53.508074 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:53.705484 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:53.787380 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:54.008236 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:54.008539 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:54.192634 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:54.266910 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:54.503538 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:54.505158 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:54.692427 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:54.776786 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:55.006055 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:55.012177 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:55.193425 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:55.266816 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:55.504533 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:55.507309 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:55.700234 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:55.767450 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:56.008054 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:56.008785 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:56.193583 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:56.266505 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:56.504800 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:56.504944 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:56.692869 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:56.767132 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:57.006802 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:57.006996 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:57.192287 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:57.266738 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:57.503501 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:57.505625 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:57.692613 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:57.767240 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:58.004708 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:58.008130 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:58.192930 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:58.266627 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:58.506222 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:58.506332 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:58.692530 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:58.767089 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:59.006579 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:59.009226 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:59.193005 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:59.266747 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:59.504843 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:59.505175 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:59.692534 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:59.767207 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:00.011958 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:00.032117 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:00.228389 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:00.281522 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:00.505899 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:00.506511 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:00.693439 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:00.767004 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:01.007027 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:01.008495 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:01.193122 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:01.267411 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:01.506231 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:01.508534 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:01.693491 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:01.768664 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:02.006370 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:02.008220 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:02.195407 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:02.267380 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:02.506478 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:02.506744 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:02.693372 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:02.766939 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:03.016414 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:03.023070 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:03.193829 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:03.295361 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:03.509233 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:03.509794 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:03.693836 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:03.766255 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:04.006571 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:04.007051 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:04.193400 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:04.266822 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:04.505579 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:04.507416 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:04.693185 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:04.766613 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:05.005762 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:05.008243 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:05.194037 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:05.266614 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:05.503758 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:05.505319 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:05.692337 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:05.766301 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:06.003406 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:06.007636 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:06.192968 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:06.266503 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:06.504864 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:06.505231 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:06.693689 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:06.766692 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:07.005469 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:07.007262 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:07.192727 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:07.267078 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:07.505479 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:07.506346 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:07.692449 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:07.766462 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:08.007721 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:08.007885 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:08.204862 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:08.265875 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:08.503932 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:08.506179 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:08.693110 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:08.767128 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:09.004316 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:09.007931 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:09.193532 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:09.267079 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:09.505825 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:09.506387 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:09.692696 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:09.767045 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:10.005477 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:10.007638 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:10.193955 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:10.267357 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:10.504838 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:10.507047 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:10.693231 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:10.766720 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:11.005961 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:11.007585 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:11.193189 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:11.267274 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:11.504759 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:11.505953 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:11.694265 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:11.767264 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:12.004826 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:12.008935 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:12.192923 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:12.266429 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:12.504133 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:12.512396 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:12.692438 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:12.766310 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:13.005240 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:13.006847 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:13.193237 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:13.266531 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:13.506188 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:13.506534 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:13.692776 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:13.766945 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:14.004941 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:14.006997 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:14.193274 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:14.266570 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:14.503423 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:14.505448 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:14.692580 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:14.766357 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:15.008405 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:15.008956 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:15.193413 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:15.266582 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:15.503947 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:15.505317 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:15.693120 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:15.766432 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:16.004420 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:16.017231 1043921 kapi.go:107] duration metric: took 1m11.015389153s to wait for kubernetes.io/minikube-addons=registry ...
	I1123 07:58:16.192842 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:16.266907 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:16.503561 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:16.692831 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:16.766534 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:17.004724 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:17.193491 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:17.267095 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:17.503472 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:17.692353 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:17.766506 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:18.007834 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:18.192613 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:18.267348 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:18.503889 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:18.693000 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:18.773862 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:19.005588 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:19.193728 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:19.267526 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:19.504378 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:19.692636 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:19.767124 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:20.005069 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:20.193318 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:20.266663 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:20.503977 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:20.692810 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:20.766966 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:21.003379 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:21.193256 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:21.266972 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:21.503104 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:21.692530 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:21.766374 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:22.006986 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:22.193261 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:22.266698 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:22.504883 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:22.692861 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:22.765906 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:23.003132 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:23.192827 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:23.265582 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:23.504939 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:23.694242 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:23.766197 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:24.004431 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:24.193163 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:24.268575 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:24.504602 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:24.692881 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:24.766707 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:25.005405 1043921 kapi.go:107] duration metric: took 1m20.005430467s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1123 07:58:25.193448 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:25.268918 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:25.776010 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:25.776760 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:26.193022 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:26.266299 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:26.692285 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:26.766465 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:27.197813 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:27.267545 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:27.692954 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:27.767247 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:28.191946 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:28.275166 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:28.693761 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:28.775843 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:29.194368 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:29.267214 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:29.693315 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:29.767779 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:30.195975 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:30.268765 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:30.692535 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:30.766389 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:31.192830 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:31.265840 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:31.692971 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:31.766921 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:32.198486 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:32.293264 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:32.692362 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:32.766086 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:33.197505 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:33.266999 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:33.692058 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:33.766241 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:34.192804 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:34.266538 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:34.692130 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:34.766678 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:35.193571 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:35.266569 1043921 kapi.go:107] duration metric: took 1m29.503694816s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1123 07:58:35.692690 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:36.193234 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:36.692031 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:37.192808 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:37.693451 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:38.193969 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:38.692488 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:39.192945 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:39.692547 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:40.193175 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:40.693765 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:41.193075 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:41.695212 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:42.193150 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:42.692465 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:43.193262 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:43.693316 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:44.192690 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:44.692991 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:45.196723 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:45.693258 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:46.192524 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:46.692514 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:47.192969 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:47.692439 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:48.193597 1043921 kapi.go:107] duration metric: took 1m39.504270078s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1123 07:58:48.196576 1043921 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-782760 cluster.
	I1123 07:58:48.199418 1043921 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1123 07:58:48.202300 1043921 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1123 07:58:48.205000 1043921 out.go:179] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner, amd-gpu-device-plugin, inspektor-gadget, registry-creds, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1123 07:58:48.207771 1043921 addons.go:530] duration metric: took 1m49.163731725s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner amd-gpu-device-plugin inspektor-gadget registry-creds metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1123 07:58:48.207823 1043921 start.go:247] waiting for cluster config update ...
	I1123 07:58:48.207863 1043921 start.go:256] writing updated cluster config ...
	I1123 07:58:48.208202 1043921 ssh_runner.go:195] Run: rm -f paused
	I1123 07:58:48.213178 1043921 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 07:58:48.294503 1043921 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d9vmc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:48.300525 1043921 pod_ready.go:94] pod "coredns-66bc5c9577-d9vmc" is "Ready"
	I1123 07:58:48.300560 1043921 pod_ready.go:86] duration metric: took 6.026831ms for pod "coredns-66bc5c9577-d9vmc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:48.303232 1043921 pod_ready.go:83] waiting for pod "etcd-addons-782760" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:48.309523 1043921 pod_ready.go:94] pod "etcd-addons-782760" is "Ready"
	I1123 07:58:48.309549 1043921 pod_ready.go:86] duration metric: took 6.293818ms for pod "etcd-addons-782760" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:48.312061 1043921 pod_ready.go:83] waiting for pod "kube-apiserver-addons-782760" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:48.316408 1043921 pod_ready.go:94] pod "kube-apiserver-addons-782760" is "Ready"
	I1123 07:58:48.316434 1043921 pod_ready.go:86] duration metric: took 4.347445ms for pod "kube-apiserver-addons-782760" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:48.318735 1043921 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-782760" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:48.617398 1043921 pod_ready.go:94] pod "kube-controller-manager-addons-782760" is "Ready"
	I1123 07:58:48.617424 1043921 pod_ready.go:86] duration metric: took 298.66452ms for pod "kube-controller-manager-addons-782760" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:48.817704 1043921 pod_ready.go:83] waiting for pod "kube-proxy-jv2pd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:49.217308 1043921 pod_ready.go:94] pod "kube-proxy-jv2pd" is "Ready"
	I1123 07:58:49.217337 1043921 pod_ready.go:86] duration metric: took 399.60579ms for pod "kube-proxy-jv2pd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:49.418163 1043921 pod_ready.go:83] waiting for pod "kube-scheduler-addons-782760" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:49.817143 1043921 pod_ready.go:94] pod "kube-scheduler-addons-782760" is "Ready"
	I1123 07:58:49.817174 1043921 pod_ready.go:86] duration metric: took 398.98294ms for pod "kube-scheduler-addons-782760" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:49.817187 1043921 pod_ready.go:40] duration metric: took 1.603976757s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 07:58:49.874412 1043921 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 07:58:49.877676 1043921 out.go:179] * Done! kubectl is now configured to use "addons-782760" cluster and "default" namespace by default
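	Note: the repeated kapi.go:96 lines above are minikube polling each addon's label selector until the matching pods report Ready. As an illustration only (not minikube's actual code), a minimal client-go sketch of that polling pattern could look like the following; the kubeconfig path, poll interval, and timeout are placeholder assumptions, not values taken from this run.
	
	// Illustrative sketch of the "waiting for pod <selector>" loop seen above.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitForSelector blocks until every pod matching selector has the Ready
	// condition, or until timeout expires. Transient list errors and empty
	// results simply cause another poll, mirroring the Pending lines in the log.
	func waitForSelector(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // keep polling
				}
				for _, p := range pods.Items {
					ready := false
					for _, c := range p.Status.Conditions {
						if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
							ready = true
						}
					}
					if !ready {
						return false, nil
					}
				}
				return true, nil
			})
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForSelector(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 5*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("selector ready")
	}
	
	The kubernetes.io/minikube-addons=registry selector matches the one in the log; substituting app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=csi-hostpath-driver, or kubernetes.io/minikube-addons=gcp-auth reproduces the other waits shown above.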
	
	
	==> CRI-O <==
	Nov 23 08:01:53 addons-782760 crio[830]: time="2025-11-23T08:01:53.560922968Z" level=info msg="Removed container 6ae410650f8232cdd04427aa115c13fff1c84d5e0a4d49f88b7d115a81fbea72: kube-system/registry-creds-764b6fb674-5m8ft/registry-creds" id=a0eb34f8-bff1-462f-9ef2-8358a7f2414e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.041831212Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-2jt7c/POD" id=afec9d88-bb9f-4ddc-92dd-6bca42c52dc1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.041919111Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.059784036Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-2jt7c Namespace:default ID:af5ffda02d1dfd7b7cf0c93e7c010d2eeaaa1759d8ba985fa77fbaad97d1f9c3 UID:a1de9e89-8a54-431b-90a9-a96e67c6ddb0 NetNS:/var/run/netns/d0027763-27c6-43e8-821a-d887bd711a06 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001bb2cd0}] Aliases:map[]}"
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.059827612Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-2jt7c to CNI network \"kindnet\" (type=ptp)"
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.076042509Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-2jt7c Namespace:default ID:af5ffda02d1dfd7b7cf0c93e7c010d2eeaaa1759d8ba985fa77fbaad97d1f9c3 UID:a1de9e89-8a54-431b-90a9-a96e67c6ddb0 NetNS:/var/run/netns/d0027763-27c6-43e8-821a-d887bd711a06 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001bb2cd0}] Aliases:map[]}"
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.076200724Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-2jt7c for CNI network kindnet (type=ptp)"
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.087509413Z" level=info msg="Ran pod sandbox af5ffda02d1dfd7b7cf0c93e7c010d2eeaaa1759d8ba985fa77fbaad97d1f9c3 with infra container: default/hello-world-app-5d498dc89-2jt7c/POD" id=afec9d88-bb9f-4ddc-92dd-6bca42c52dc1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.089141047Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ae1b8554-acf2-479b-93dd-269181b72c6c name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.089280834Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=ae1b8554-acf2-479b-93dd-269181b72c6c name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.089331195Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=ae1b8554-acf2-479b-93dd-269181b72c6c name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.09164235Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=01de36a7-bd2e-4dc4-800c-88042035d533 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.092856395Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.802134304Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=01de36a7-bd2e-4dc4-800c-88042035d533 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.80295344Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9839934b-fc5e-4edb-b19b-2e161f09a851 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.805979782Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=0814f5bb-a215-4e3e-9b36-56a4c3e5d9bd name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.812078307Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-2jt7c/hello-world-app" id=e5f3427a-9abb-4a6b-8732-7835c6a0ec94 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.812600786Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.824776217Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.825096766Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a0743dc6a2ad3b0b1a1bed095dc970af31db69a0168d897841d9a5a121f662e2/merged/etc/passwd: no such file or directory"
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.825195077Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a0743dc6a2ad3b0b1a1bed095dc970af31db69a0168d897841d9a5a121f662e2/merged/etc/group: no such file or directory"
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.827426504Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.861155361Z" level=info msg="Created container c5f15acd3872f23ba530751b3cfad877c3230ed8dd0b55b9a7ceaded03d65113: default/hello-world-app-5d498dc89-2jt7c/hello-world-app" id=e5f3427a-9abb-4a6b-8732-7835c6a0ec94 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.863390325Z" level=info msg="Starting container: c5f15acd3872f23ba530751b3cfad877c3230ed8dd0b55b9a7ceaded03d65113" id=7ce5bf19-879e-4d82-a7f9-5c2399e8f8b1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:01:58 addons-782760 crio[830]: time="2025-11-23T08:01:58.869847913Z" level=info msg="Started container" PID=7139 containerID=c5f15acd3872f23ba530751b3cfad877c3230ed8dd0b55b9a7ceaded03d65113 description=default/hello-world-app-5d498dc89-2jt7c/hello-world-app id=7ce5bf19-879e-4d82-a7f9-5c2399e8f8b1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=af5ffda02d1dfd7b7cf0c93e7c010d2eeaaa1759d8ba985fa77fbaad97d1f9c3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	c5f15acd3872f       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   af5ffda02d1df       hello-world-app-5d498dc89-2jt7c            default
	87226ea45f37b       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             6 seconds ago            Exited              registry-creds                           1                   45f8bec7c3d2c       registry-creds-764b6fb674-5m8ft            kube-system
	0b3fb3dd69391       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   e7e8bf584d2ac       nginx                                      default
	3db8fe62cb746       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   5bb3d71b77784       busybox                                    default
	e6b397fa20b7b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   83040823fe3e0       gcp-auth-78565c9fb4-ntzsg                  gcp-auth
	b7dbc42af3eaa       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   78aa636a62e32       csi-hostpathplugin-8j7r2                   kube-system
	e81b53e67dd69       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   78aa636a62e32       csi-hostpathplugin-8j7r2                   kube-system
	25c0aa23665db       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   78aa636a62e32       csi-hostpathplugin-8j7r2                   kube-system
	654a0f71268c2       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   78aa636a62e32       csi-hostpathplugin-8j7r2                   kube-system
	05fe963f89f66       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   78aa636a62e32       csi-hostpathplugin-8j7r2                   kube-system
	96dcbe281d6f8       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   852a4ec0a6e40       gadget-pqgzc                               gadget
	0aa44e320a18e       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago            Running             controller                               0                   bce481b6afdec       ingress-nginx-controller-6c8bf45fb-7jxcp   ingress-nginx
	ff09ce175fe75       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   c78167bd447eb       csi-hostpath-attacher-0                    kube-system
	7ca479867b243       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   78aa636a62e32       csi-hostpathplugin-8j7r2                   kube-system
	35dd0f9bcb50a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   2c16a8b01a95f       registry-proxy-crmkh                       kube-system
	90e12086b17a9       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   aba90cf2826d7       nvidia-device-plugin-daemonset-stqrq       kube-system
	410c2359fb0c0       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   a054f5920c449       snapshot-controller-7d9fbc56b8-wqcnm       kube-system
	774655bd891a2       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   602d4f62989cc       yakd-dashboard-5ff678cb9-6j7jv             yakd-dashboard
	9311aa036bd97       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   5480a41940775       kube-ingress-dns-minikube                  kube-system
	1d4e31902581e       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   9e63ad95dec7f       registry-6b586f9694-rblw8                  kube-system
	9734ce796f3ef       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              4 minutes ago            Running             csi-resizer                              0                   4bff8ec0f7b4c       csi-hostpath-resizer-0                     kube-system
	8d569fddb15d1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   4 minutes ago            Exited              patch                                    0                   5992efcdc388f       ingress-nginx-admission-patch-g4ft4        ingress-nginx
	fb98b04224a9c       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   172424698e105       metrics-server-85b7d694d7-l4cfr            kube-system
	e99c3d22230ac       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   4 minutes ago            Exited              create                                   0                   43512c09e295d       ingress-nginx-admission-create-c8dgn       ingress-nginx
	8e887d5a1cac1       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   0ecd11a14c527       local-path-provisioner-648f6765c9-7zdjv    local-path-storage
	d2ffd09041ccf       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   b42228ca86d71       snapshot-controller-7d9fbc56b8-4rwkm       kube-system
	ea6b982ceb37f       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               4 minutes ago            Running             cloud-spanner-emulator                   0                   d1a586cc7c66f       cloud-spanner-emulator-5bdddb765-wn4d8     default
	685798fa38932       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   dae52078c9fdc       storage-provisioner                        kube-system
	01a96c05c2e23       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   306c096ab40e7       coredns-66bc5c9577-d9vmc                   kube-system
	995c0ad221a0e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago            Running             kube-proxy                               0                   c75d75de3c2c8       kube-proxy-jv2pd                           kube-system
	d3d5fbc406391       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   6c9e098586e6d       kindnet-qrqlv                              kube-system
	03fd92afca30f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   de7486f71e017       etcd-addons-782760                         kube-system
	7b54407c8a503       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   eab14dfb03869       kube-scheduler-addons-782760               kube-system
	4952e333e5cbc       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   d46c7e1746d6f       kube-controller-manager-addons-782760      kube-system
	1e9a39b963c81       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   1ce9f5174506d       kube-apiserver-addons-782760               kube-system
	
	
	==> coredns [01a96c05c2e23fce327adec63f507ecc75154c56dc51b79294c0ada40f73d486] <==
	[INFO] 10.244.0.15:56386 - 14745 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.003411734s
	[INFO] 10.244.0.15:56386 - 57024 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00018397s
	[INFO] 10.244.0.15:56386 - 43933 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000121031s
	[INFO] 10.244.0.15:43531 - 51117 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000190058s
	[INFO] 10.244.0.15:43531 - 50854 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149066s
	[INFO] 10.244.0.15:49798 - 46275 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000113072s
	[INFO] 10.244.0.15:49798 - 46513 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091033s
	[INFO] 10.244.0.15:57131 - 10808 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012304s
	[INFO] 10.244.0.15:57131 - 10529 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090606s
	[INFO] 10.244.0.15:42837 - 61578 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001699414s
	[INFO] 10.244.0.15:42837 - 61825 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001564624s
	[INFO] 10.244.0.15:52120 - 57605 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000174551s
	[INFO] 10.244.0.15:52120 - 57392 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000294441s
	[INFO] 10.244.0.21:37004 - 63790 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00017803s
	[INFO] 10.244.0.21:40712 - 33026 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000457603s
	[INFO] 10.244.0.21:33073 - 42541 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000143898s
	[INFO] 10.244.0.21:48780 - 21993 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000101092s
	[INFO] 10.244.0.21:52179 - 41164 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000106803s
	[INFO] 10.244.0.21:41575 - 21852 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090344s
	[INFO] 10.244.0.21:41629 - 61232 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002064918s
	[INFO] 10.244.0.21:37794 - 19102 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001709506s
	[INFO] 10.244.0.21:42252 - 65258 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000751569s
	[INFO] 10.244.0.21:47660 - 18640 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003215325s
	[INFO] 10.244.0.23:36263 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000157772s
	[INFO] 10.244.0.23:54909 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000157075s
	
	
	==> describe nodes <==
	Name:               addons-782760
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-782760
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=addons-782760
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T07_56_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-782760
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-782760"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 07:56:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-782760
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:01:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 07:59:57 +0000   Sun, 23 Nov 2025 07:56:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 07:59:57 +0000   Sun, 23 Nov 2025 07:56:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 07:59:57 +0000   Sun, 23 Nov 2025 07:56:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 07:59:57 +0000   Sun, 23 Nov 2025 07:57:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-782760
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                e4972c17-bf29-4288-839a-93a0193f5931
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  default                     cloud-spanner-emulator-5bdddb765-wn4d8      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  default                     hello-world-app-5d498dc89-2jt7c             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-pqgzc                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  gcp-auth                    gcp-auth-78565c9fb4-ntzsg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-7jxcp    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m55s
	  kube-system                 coredns-66bc5c9577-d9vmc                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m1s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 csi-hostpathplugin-8j7r2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 etcd-addons-782760                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m6s
	  kube-system                 kindnet-qrqlv                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m1s
	  kube-system                 kube-apiserver-addons-782760                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-controller-manager-addons-782760       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-proxy-jv2pd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-scheduler-addons-782760                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 metrics-server-85b7d694d7-l4cfr             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m55s
	  kube-system                 nvidia-device-plugin-daemonset-stqrq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 registry-6b586f9694-rblw8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 registry-creds-764b6fb674-5m8ft             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 registry-proxy-crmkh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 snapshot-controller-7d9fbc56b8-4rwkm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 snapshot-controller-7d9fbc56b8-wqcnm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  local-path-storage          local-path-provisioner-648f6765c9-7zdjv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-6j7jv              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m59s  kube-proxy       
	  Normal   Starting                 5m6s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m6s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m6s   kubelet          Node addons-782760 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m6s   kubelet          Node addons-782760 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m6s   kubelet          Node addons-782760 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m2s   node-controller  Node addons-782760 event: Registered Node addons-782760 in Controller
	  Normal   NodeReady                4m20s  kubelet          Node addons-782760 status is now: NodeReady
	
	
	==> dmesg <==
	[ +30.904426] overlayfs: idmapped layers are currently not supported
	[Nov23 07:10] overlayfs: idmapped layers are currently not supported
	[Nov23 07:12] overlayfs: idmapped layers are currently not supported
	[Nov23 07:13] overlayfs: idmapped layers are currently not supported
	[Nov23 07:14] overlayfs: idmapped layers are currently not supported
	[ +16.709544] overlayfs: idmapped layers are currently not supported
	[ +39.052436] overlayfs: idmapped layers are currently not supported
	[Nov23 07:16] overlayfs: idmapped layers are currently not supported
	[Nov23 07:17] overlayfs: idmapped layers are currently not supported
	[Nov23 07:18] overlayfs: idmapped layers are currently not supported
	[ +42.777291] overlayfs: idmapped layers are currently not supported
	[Nov23 07:19] overlayfs: idmapped layers are currently not supported
	[Nov23 07:20] overlayfs: idmapped layers are currently not supported
	[Nov23 07:21] overlayfs: idmapped layers are currently not supported
	[ +25.538176] overlayfs: idmapped layers are currently not supported
	[Nov23 07:22] overlayfs: idmapped layers are currently not supported
	[ +17.484475] overlayfs: idmapped layers are currently not supported
	[Nov23 07:23] overlayfs: idmapped layers are currently not supported
	[Nov23 07:24] overlayfs: idmapped layers are currently not supported
	[Nov23 07:25] overlayfs: idmapped layers are currently not supported
	[Nov23 07:26] overlayfs: idmapped layers are currently not supported
	[Nov23 07:27] overlayfs: idmapped layers are currently not supported
	[ +38.121959] overlayfs: idmapped layers are currently not supported
	[Nov23 07:55] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 07:56] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [03fd92afca30f9b387a50e40f209a51d44d2219bf6337bbe9b4396831fce9ad8] <==
	{"level":"warn","ts":"2025-11-23T07:56:49.367891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.383087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.408523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.439739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.463751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.484391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.516667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.519244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.552798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.574300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.582074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.598288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.620015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.653056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.662612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.697865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.721015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.730790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.840303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:57:05.720418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:57:05.743929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:57:27.719377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:57:27.734312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:57:27.771152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:57:27.779450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35708","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [e6b397fa20b7b42f76e17ad2ed2e50d2ded0d57757201cd0fcc2d4d1aa701e3a] <==
	2025/11/23 07:58:47 GCP Auth Webhook started!
	2025/11/23 07:58:50 Ready to marshal response ...
	2025/11/23 07:58:50 Ready to write response ...
	2025/11/23 07:58:50 Ready to marshal response ...
	2025/11/23 07:58:50 Ready to write response ...
	2025/11/23 07:58:50 Ready to marshal response ...
	2025/11/23 07:58:50 Ready to write response ...
	2025/11/23 07:59:10 Ready to marshal response ...
	2025/11/23 07:59:10 Ready to write response ...
	2025/11/23 07:59:12 Ready to marshal response ...
	2025/11/23 07:59:12 Ready to write response ...
	2025/11/23 07:59:12 Ready to marshal response ...
	2025/11/23 07:59:12 Ready to write response ...
	2025/11/23 07:59:20 Ready to marshal response ...
	2025/11/23 07:59:20 Ready to write response ...
	2025/11/23 07:59:36 Ready to marshal response ...
	2025/11/23 07:59:36 Ready to write response ...
	2025/11/23 07:59:41 Ready to marshal response ...
	2025/11/23 07:59:41 Ready to write response ...
	2025/11/23 08:00:08 Ready to marshal response ...
	2025/11/23 08:00:08 Ready to write response ...
	2025/11/23 08:01:57 Ready to marshal response ...
	2025/11/23 08:01:57 Ready to write response ...
	
	
	==> kernel <==
	 08:01:59 up  8:44,  0 user,  load average: 0.49, 0.95, 0.88
	Linux addons-782760 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d3d5fbc406391cea6bd05d6bf3e77708af72d668d9cf1f8f67553646b8ebd263] <==
	I1123 07:59:59.608398       1 main.go:301] handling current node
	I1123 08:00:09.609165       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:00:09.609196       1 main.go:301] handling current node
	I1123 08:00:19.608270       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:00:19.608302       1 main.go:301] handling current node
	I1123 08:00:29.615286       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:00:29.615402       1 main.go:301] handling current node
	I1123 08:00:39.610398       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:00:39.610516       1 main.go:301] handling current node
	I1123 08:00:49.609300       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:00:49.609341       1 main.go:301] handling current node
	I1123 08:00:59.608639       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:00:59.608670       1 main.go:301] handling current node
	I1123 08:01:09.610686       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:01:09.610720       1 main.go:301] handling current node
	I1123 08:01:19.617329       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:01:19.617364       1 main.go:301] handling current node
	I1123 08:01:29.617358       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:01:29.617390       1 main.go:301] handling current node
	I1123 08:01:39.614203       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:01:39.614236       1 main.go:301] handling current node
	I1123 08:01:49.617171       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:01:49.617203       1 main.go:301] handling current node
	I1123 08:01:59.609349       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:01:59.609381       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1e9a39b963c81a6ff6ba191d66d478a513599130671d0996e8d442248af5eee3] <==
	W1123 07:57:27.734128       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 07:57:27.760968       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1123 07:57:27.779089       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 07:57:39.731435       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.168.210:443: connect: connection refused
	E1123 07:57:39.731534       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.168.210:443: connect: connection refused" logger="UnhandledError"
	W1123 07:57:39.731995       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.168.210:443: connect: connection refused
	E1123 07:57:39.732078       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.168.210:443: connect: connection refused" logger="UnhandledError"
	W1123 07:57:39.825320       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.168.210:443: connect: connection refused
	E1123 07:57:39.825363       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.168.210:443: connect: connection refused" logger="UnhandledError"
	W1123 07:57:53.720693       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 07:57:53.721085       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1123 07:57:53.720995       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.124.72:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.124.72:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.124.72:443: connect: connection refused" logger="UnhandledError"
	E1123 07:57:53.723596       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.124.72:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.124.72:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.124.72:443: connect: connection refused" logger="UnhandledError"
	E1123 07:57:53.729421       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.124.72:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.124.72:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.124.72:443: connect: connection refused" logger="UnhandledError"
	I1123 07:57:53.908608       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1123 07:58:59.769556       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42130: use of closed network connection
	E1123 07:58:59.990994       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42158: use of closed network connection
	E1123 07:59:00.326038       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42166: use of closed network connection
	I1123 07:59:35.933077       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1123 07:59:36.232920       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.207.182"}
	I1123 07:59:52.725994       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1123 08:01:57.881657       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.184.202"}
	
	
	==> kube-controller-manager [4952e333e5cbca2ab975c1b717b23754934a25101ec680e6df940a3abe4aa3e3] <==
	I1123 07:56:57.744631       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 07:56:57.745708       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 07:56:57.745724       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 07:56:57.746910       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 07:56:57.747084       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 07:56:57.747214       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 07:56:57.748324       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 07:56:57.749610       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 07:56:57.750465       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 07:56:57.752932       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 07:56:57.752951       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 07:56:57.753040       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 07:56:57.753085       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 07:56:57.753113       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 07:56:57.753142       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 07:56:57.762219       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-782760" podCIDRs=["10.244.0.0/24"]
	E1123 07:57:04.231863       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1123 07:57:27.712138       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1123 07:57:27.712300       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1123 07:57:27.712365       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1123 07:57:27.742053       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1123 07:57:27.750509       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1123 07:57:27.812754       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 07:57:27.851533       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 07:57:42.709683       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [995c0ad221a0ea807ac716f43224f6603841c0abb322b78cd157d03df1535c45] <==
	I1123 07:56:59.924406       1 server_linux.go:53] "Using iptables proxy"
	I1123 07:57:00.037896       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 07:57:00.142186       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 07:57:00.142250       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 07:57:00.142350       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 07:57:00.358112       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 07:57:00.358190       1 server_linux.go:132] "Using iptables Proxier"
	I1123 07:57:00.371626       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 07:57:00.372025       1 server.go:527] "Version info" version="v1.34.1"
	I1123 07:57:00.372045       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 07:57:00.389924       1 config.go:106] "Starting endpoint slice config controller"
	I1123 07:57:00.389957       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 07:57:00.390338       1 config.go:200] "Starting service config controller"
	I1123 07:57:00.390346       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 07:57:00.390683       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 07:57:00.390691       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 07:57:00.394041       1 config.go:309] "Starting node config controller"
	I1123 07:57:00.394139       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 07:57:00.394163       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 07:57:00.490450       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 07:57:00.490526       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 07:57:00.491432       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7b54407c8a503487b0c75dba534bb8d12c3f658348cad08eeee8783e2002685a] <==
	I1123 07:56:51.215967       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 07:56:51.216037       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 07:56:51.216373       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 07:56:51.216428       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1123 07:56:51.224299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 07:56:51.226728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 07:56:51.226911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 07:56:51.227012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 07:56:51.227228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 07:56:51.227357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 07:56:51.227460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 07:56:51.229708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 07:56:51.229832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 07:56:51.229917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 07:56:51.230149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 07:56:51.230259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 07:56:51.230357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 07:56:51.230452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 07:56:51.230539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 07:56:51.230645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 07:56:51.230796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 07:56:51.230890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 07:56:51.231034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 07:56:52.210210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1123 07:56:54.916158       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:00:16 addons-782760 kubelet[1275]: E1123 08:00:16.499635    1275 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ed4472e409ace13fafe3985f10a4c7764f9f1113e57bbbda3696b48cf3d44be\": container with ID starting with 3ed4472e409ace13fafe3985f10a4c7764f9f1113e57bbbda3696b48cf3d44be not found: ID does not exist" containerID="3ed4472e409ace13fafe3985f10a4c7764f9f1113e57bbbda3696b48cf3d44be"
	Nov 23 08:00:16 addons-782760 kubelet[1275]: I1123 08:00:16.499799    1275 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ed4472e409ace13fafe3985f10a4c7764f9f1113e57bbbda3696b48cf3d44be"} err="failed to get container status \"3ed4472e409ace13fafe3985f10a4c7764f9f1113e57bbbda3696b48cf3d44be\": rpc error: code = NotFound desc = could not find container \"3ed4472e409ace13fafe3985f10a4c7764f9f1113e57bbbda3696b48cf3d44be\": container with ID starting with 3ed4472e409ace13fafe3985f10a4c7764f9f1113e57bbbda3696b48cf3d44be not found: ID does not exist"
	Nov 23 08:00:16 addons-782760 kubelet[1275]: I1123 08:00:16.531593    1275 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rz9sl\" (UniqueName: \"kubernetes.io/projected/aa65de24-c269-4995-bfe7-e991fef8da93-kube-api-access-rz9sl\") on node \"addons-782760\" DevicePath \"\""
	Nov 23 08:00:16 addons-782760 kubelet[1275]: I1123 08:00:16.531787    1275 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-4ffcd5d0-bc23-44d2-b098-eb5d74ee4d84\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^6d69ea8a-c842-11f0-96bf-feb283ae8ceb\") on node \"addons-782760\" "
	Nov 23 08:00:16 addons-782760 kubelet[1275]: I1123 08:00:16.531871    1275 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/aa65de24-c269-4995-bfe7-e991fef8da93-gcp-creds\") on node \"addons-782760\" DevicePath \"\""
	Nov 23 08:00:16 addons-782760 kubelet[1275]: I1123 08:00:16.539628    1275 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-4ffcd5d0-bc23-44d2-b098-eb5d74ee4d84" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^6d69ea8a-c842-11f0-96bf-feb283ae8ceb") on node "addons-782760"
	Nov 23 08:00:16 addons-782760 kubelet[1275]: I1123 08:00:16.632574    1275 reconciler_common.go:299] "Volume detached for volume \"pvc-4ffcd5d0-bc23-44d2-b098-eb5d74ee4d84\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^6d69ea8a-c842-11f0-96bf-feb283ae8ceb\") on node \"addons-782760\" DevicePath \"\""
	Nov 23 08:00:17 addons-782760 kubelet[1275]: I1123 08:00:17.289429    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-stqrq" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:00:17 addons-782760 kubelet[1275]: I1123 08:00:17.293573    1275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa65de24-c269-4995-bfe7-e991fef8da93" path="/var/lib/kubelet/pods/aa65de24-c269-4995-bfe7-e991fef8da93/volumes"
	Nov 23 08:00:37 addons-782760 kubelet[1275]: I1123 08:00:37.289747    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-rblw8" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:00:38 addons-782760 kubelet[1275]: I1123 08:00:38.289229    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-crmkh" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:01:36 addons-782760 kubelet[1275]: I1123 08:01:36.288925    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-stqrq" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:01:50 addons-782760 kubelet[1275]: I1123 08:01:50.089775    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-5m8ft" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:01:52 addons-782760 kubelet[1275]: I1123 08:01:52.794050    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-5m8ft" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:01:52 addons-782760 kubelet[1275]: I1123 08:01:52.794113    1275 scope.go:117] "RemoveContainer" containerID="6ae410650f8232cdd04427aa115c13fff1c84d5e0a4d49f88b7d115a81fbea72"
	Nov 23 08:01:53 addons-782760 kubelet[1275]: E1123 08:01:53.418258    1275 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/bc8e8fcae51cb4756b88e8d3a960129353c595080469414a9b4a0dfa718ecb05/diff" to get inode usage: stat /var/lib/containers/storage/overlay/bc8e8fcae51cb4756b88e8d3a960129353c595080469414a9b4a0dfa718ecb05/diff: no such file or directory, extraDiskErr: <nil>
	Nov 23 08:01:53 addons-782760 kubelet[1275]: I1123 08:01:53.541784    1275 scope.go:117] "RemoveContainer" containerID="6ae410650f8232cdd04427aa115c13fff1c84d5e0a4d49f88b7d115a81fbea72"
	Nov 23 08:01:53 addons-782760 kubelet[1275]: I1123 08:01:53.799964    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-5m8ft" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:01:53 addons-782760 kubelet[1275]: I1123 08:01:53.800766    1275 scope.go:117] "RemoveContainer" containerID="87226ea45f37b2f732cdb31cb050b7137ec654c5751be344d1f05ac17d29e3ec"
	Nov 23 08:01:53 addons-782760 kubelet[1275]: E1123 08:01:53.801316    1275 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-5m8ft_kube-system(6908fc1b-d56b-4159-bae1-3a2c7f324b9e)\"" pod="kube-system/registry-creds-764b6fb674-5m8ft" podUID="6908fc1b-d56b-4159-bae1-3a2c7f324b9e"
	Nov 23 08:01:57 addons-782760 kubelet[1275]: I1123 08:01:57.753027    1275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a1de9e89-8a54-431b-90a9-a96e67c6ddb0-gcp-creds\") pod \"hello-world-app-5d498dc89-2jt7c\" (UID: \"a1de9e89-8a54-431b-90a9-a96e67c6ddb0\") " pod="default/hello-world-app-5d498dc89-2jt7c"
	Nov 23 08:01:57 addons-782760 kubelet[1275]: I1123 08:01:57.753593    1275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9h2w\" (UniqueName: \"kubernetes.io/projected/a1de9e89-8a54-431b-90a9-a96e67c6ddb0-kube-api-access-f9h2w\") pod \"hello-world-app-5d498dc89-2jt7c\" (UID: \"a1de9e89-8a54-431b-90a9-a96e67c6ddb0\") " pod="default/hello-world-app-5d498dc89-2jt7c"
	Nov 23 08:01:58 addons-782760 kubelet[1275]: W1123 08:01:58.084967    1275 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3e0fb2f2cb2c2ca7bc7b036b5b90817ca7c6955044febd5450a96db807d17185/crio-af5ffda02d1dfd7b7cf0c93e7c010d2eeaaa1759d8ba985fa77fbaad97d1f9c3 WatchSource:0}: Error finding container af5ffda02d1dfd7b7cf0c93e7c010d2eeaaa1759d8ba985fa77fbaad97d1f9c3: Status 404 returned error can't find the container with id af5ffda02d1dfd7b7cf0c93e7c010d2eeaaa1759d8ba985fa77fbaad97d1f9c3
	Nov 23 08:01:59 addons-782760 kubelet[1275]: I1123 08:01:59.289353    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-rblw8" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:01:59 addons-782760 kubelet[1275]: I1123 08:01:59.879771    1275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-2jt7c" podStartSLOduration=2.165267584 podStartE2EDuration="2.87975231s" podCreationTimestamp="2025-11-23 08:01:57 +0000 UTC" firstStartedPulling="2025-11-23 08:01:58.089581822 +0000 UTC m=+304.910478389" lastFinishedPulling="2025-11-23 08:01:58.804066548 +0000 UTC m=+305.624963115" observedRunningTime="2025-11-23 08:01:59.879322308 +0000 UTC m=+306.700218900" watchObservedRunningTime="2025-11-23 08:01:59.87975231 +0000 UTC m=+306.700648886"
	
	
	==> storage-provisioner [685798fa38932c34ea5b41c1b40649d3026a53a13752ea5bc0703dc6086e5d47] <==
	W1123 08:01:36.056278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:38.059603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:38.064065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:40.067795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:40.074502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:42.078553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:42.084129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:44.087599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:44.093174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:46.096021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:46.101196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:48.104232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:48.108952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:50.114298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:50.119618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:52.123100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:52.130706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:54.133238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:54.137676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:56.141492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:56.146183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:58.149714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:01:58.155408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:02:00.170423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:02:00.213731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-782760 -n addons-782760
helpers_test.go:269: (dbg) Run:  kubectl --context addons-782760 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-c8dgn ingress-nginx-admission-patch-g4ft4
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-782760 describe pod ingress-nginx-admission-create-c8dgn ingress-nginx-admission-patch-g4ft4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-782760 describe pod ingress-nginx-admission-create-c8dgn ingress-nginx-admission-patch-g4ft4: exit status 1 (98.185779ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-c8dgn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-g4ft4" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-782760 describe pod ingress-nginx-admission-create-c8dgn ingress-nginx-admission-patch-g4ft4: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-782760 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (274.565761ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:02:01.321465 1053586 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:02:01.323074 1053586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:02:01.323097 1053586 out.go:374] Setting ErrFile to fd 2...
	I1123 08:02:01.323104 1053586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:02:01.323412 1053586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:02:01.323751 1053586 mustload.go:66] Loading cluster: addons-782760
	I1123 08:02:01.324214 1053586 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:02:01.324234 1053586 addons.go:622] checking whether the cluster is paused
	I1123 08:02:01.324344 1053586 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:02:01.324359 1053586 host.go:66] Checking if "addons-782760" exists ...
	I1123 08:02:01.324872 1053586 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 08:02:01.341963 1053586 ssh_runner.go:195] Run: systemctl --version
	I1123 08:02:01.342062 1053586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 08:02:01.362548 1053586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 08:02:01.469905 1053586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:02:01.470041 1053586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:02:01.502226 1053586 cri.go:89] found id: "87226ea45f37b2f732cdb31cb050b7137ec654c5751be344d1f05ac17d29e3ec"
	I1123 08:02:01.502248 1053586 cri.go:89] found id: "b7dbc42af3eaa55b87cc8920859061e757eb023e86e81249f165e03ab50e4242"
	I1123 08:02:01.502254 1053586 cri.go:89] found id: "e81b53e67dd69b5c11fd7296687e0873840c35bd3d9a0a362120bddf439d6c1b"
	I1123 08:02:01.502258 1053586 cri.go:89] found id: "25c0aa23665db233b369dab0d5441e57c0ce88fa6616d8cf7e6b835782338180"
	I1123 08:02:01.502262 1053586 cri.go:89] found id: "654a0f71268c2242c663c96bcf3824362a6b59fde36427f2178d5a6a7a40d822"
	I1123 08:02:01.502266 1053586 cri.go:89] found id: "05fe963f89f66768688e74774e00621a5f6cfcdb1fb13cf5f9f72be082d11a49"
	I1123 08:02:01.502269 1053586 cri.go:89] found id: "ff09ce175fe75259d6414ddd02e5948745625c2bbb202a6de931ef6f7a3dd631"
	I1123 08:02:01.502273 1053586 cri.go:89] found id: "7ca479867b2432892b7d17c86aa12ad6fee7b14dfa3af5e913666586727c22e5"
	I1123 08:02:01.502277 1053586 cri.go:89] found id: "35dd0f9bcb50a0d13664543c1e5ff8dac184175da2e417035c9bf88b4c70055c"
	I1123 08:02:01.502282 1053586 cri.go:89] found id: "90e12086b17a955a96fa28343672584a5d4f7e85965306622f66ff5c2f64668b"
	I1123 08:02:01.502286 1053586 cri.go:89] found id: "410c2359fb0c01d8f73a1fd70b1094ae44de6046b129327df1bd83c0d6337ebb"
	I1123 08:02:01.502292 1053586 cri.go:89] found id: "9311aa036bd97e236f7744a9e5ffd3e67d26ec0f771860cd871daaf5ef151735"
	I1123 08:02:01.502295 1053586 cri.go:89] found id: "1d4e31902581e865cf2387b39a5a9142c169c6e1eadf244cde62a11fb2d3bc71"
	I1123 08:02:01.502299 1053586 cri.go:89] found id: "9734ce796f3ef40aea74fe5b37f2070ba72c41a196839cde80dd0861b1465993"
	I1123 08:02:01.502304 1053586 cri.go:89] found id: "fb98b04224a9c4438cfa50aabef9ca321dde423db6b9e11c6ac1ef33927bce15"
	I1123 08:02:01.502352 1053586 cri.go:89] found id: "d2ffd09041ccf70f835af84256922f049edff6ce0aa5b926e7859efc43046a15"
	I1123 08:02:01.502398 1053586 cri.go:89] found id: "685798fa38932c34ea5b41c1b40649d3026a53a13752ea5bc0703dc6086e5d47"
	I1123 08:02:01.502405 1053586 cri.go:89] found id: "01a96c05c2e23fce327adec63f507ecc75154c56dc51b79294c0ada40f73d486"
	I1123 08:02:01.502409 1053586 cri.go:89] found id: "995c0ad221a0ea807ac716f43224f6603841c0abb322b78cd157d03df1535c45"
	I1123 08:02:01.502412 1053586 cri.go:89] found id: "d3d5fbc406391cea6bd05d6bf3e77708af72d668d9cf1f8f67553646b8ebd263"
	I1123 08:02:01.502417 1053586 cri.go:89] found id: "03fd92afca30f9b387a50e40f209a51d44d2219bf6337bbe9b4396831fce9ad8"
	I1123 08:02:01.502420 1053586 cri.go:89] found id: "7b54407c8a503487b0c75dba534bb8d12c3f658348cad08eeee8783e2002685a"
	I1123 08:02:01.502423 1053586 cri.go:89] found id: "4952e333e5cbca2ab975c1b717b23754934a25101ec680e6df940a3abe4aa3e3"
	I1123 08:02:01.502429 1053586 cri.go:89] found id: "1e9a39b963c81a6ff6ba191d66d478a513599130671d0996e8d442248af5eee3"
	I1123 08:02:01.502433 1053586 cri.go:89] found id: ""
	I1123 08:02:01.502484 1053586 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:02:01.520406 1053586 out.go:203] 
	W1123 08:02:01.523288 1053586 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:02:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:02:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:02:01.523320 1053586 out.go:285] * 
	* 
	W1123 08:02:01.533197 1053586 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:02:01.536366 1053586 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-782760 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-782760 addons disable ingress --alsologtostderr -v=1: exit status 11 (289.423749ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:02:01.600534 1053629 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:02:01.601327 1053629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:02:01.601365 1053629 out.go:374] Setting ErrFile to fd 2...
	I1123 08:02:01.601389 1053629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:02:01.602133 1053629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:02:01.602915 1053629 mustload.go:66] Loading cluster: addons-782760
	I1123 08:02:01.604011 1053629 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:02:01.604115 1053629 addons.go:622] checking whether the cluster is paused
	I1123 08:02:01.604355 1053629 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:02:01.604378 1053629 host.go:66] Checking if "addons-782760" exists ...
	I1123 08:02:01.605026 1053629 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 08:02:01.622467 1053629 ssh_runner.go:195] Run: systemctl --version
	I1123 08:02:01.622531 1053629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 08:02:01.642595 1053629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 08:02:01.756099 1053629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:02:01.756239 1053629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:02:01.788820 1053629 cri.go:89] found id: "87226ea45f37b2f732cdb31cb050b7137ec654c5751be344d1f05ac17d29e3ec"
	I1123 08:02:01.788845 1053629 cri.go:89] found id: "b7dbc42af3eaa55b87cc8920859061e757eb023e86e81249f165e03ab50e4242"
	I1123 08:02:01.788850 1053629 cri.go:89] found id: "e81b53e67dd69b5c11fd7296687e0873840c35bd3d9a0a362120bddf439d6c1b"
	I1123 08:02:01.788854 1053629 cri.go:89] found id: "25c0aa23665db233b369dab0d5441e57c0ce88fa6616d8cf7e6b835782338180"
	I1123 08:02:01.788857 1053629 cri.go:89] found id: "654a0f71268c2242c663c96bcf3824362a6b59fde36427f2178d5a6a7a40d822"
	I1123 08:02:01.788861 1053629 cri.go:89] found id: "05fe963f89f66768688e74774e00621a5f6cfcdb1fb13cf5f9f72be082d11a49"
	I1123 08:02:01.788870 1053629 cri.go:89] found id: "ff09ce175fe75259d6414ddd02e5948745625c2bbb202a6de931ef6f7a3dd631"
	I1123 08:02:01.788874 1053629 cri.go:89] found id: "7ca479867b2432892b7d17c86aa12ad6fee7b14dfa3af5e913666586727c22e5"
	I1123 08:02:01.788877 1053629 cri.go:89] found id: "35dd0f9bcb50a0d13664543c1e5ff8dac184175da2e417035c9bf88b4c70055c"
	I1123 08:02:01.788884 1053629 cri.go:89] found id: "90e12086b17a955a96fa28343672584a5d4f7e85965306622f66ff5c2f64668b"
	I1123 08:02:01.788888 1053629 cri.go:89] found id: "410c2359fb0c01d8f73a1fd70b1094ae44de6046b129327df1bd83c0d6337ebb"
	I1123 08:02:01.788891 1053629 cri.go:89] found id: "9311aa036bd97e236f7744a9e5ffd3e67d26ec0f771860cd871daaf5ef151735"
	I1123 08:02:01.788895 1053629 cri.go:89] found id: "1d4e31902581e865cf2387b39a5a9142c169c6e1eadf244cde62a11fb2d3bc71"
	I1123 08:02:01.788898 1053629 cri.go:89] found id: "9734ce796f3ef40aea74fe5b37f2070ba72c41a196839cde80dd0861b1465993"
	I1123 08:02:01.788901 1053629 cri.go:89] found id: "fb98b04224a9c4438cfa50aabef9ca321dde423db6b9e11c6ac1ef33927bce15"
	I1123 08:02:01.788906 1053629 cri.go:89] found id: "d2ffd09041ccf70f835af84256922f049edff6ce0aa5b926e7859efc43046a15"
	I1123 08:02:01.788913 1053629 cri.go:89] found id: "685798fa38932c34ea5b41c1b40649d3026a53a13752ea5bc0703dc6086e5d47"
	I1123 08:02:01.788917 1053629 cri.go:89] found id: "01a96c05c2e23fce327adec63f507ecc75154c56dc51b79294c0ada40f73d486"
	I1123 08:02:01.788920 1053629 cri.go:89] found id: "995c0ad221a0ea807ac716f43224f6603841c0abb322b78cd157d03df1535c45"
	I1123 08:02:01.788923 1053629 cri.go:89] found id: "d3d5fbc406391cea6bd05d6bf3e77708af72d668d9cf1f8f67553646b8ebd263"
	I1123 08:02:01.788930 1053629 cri.go:89] found id: "03fd92afca30f9b387a50e40f209a51d44d2219bf6337bbe9b4396831fce9ad8"
	I1123 08:02:01.788935 1053629 cri.go:89] found id: "7b54407c8a503487b0c75dba534bb8d12c3f658348cad08eeee8783e2002685a"
	I1123 08:02:01.788939 1053629 cri.go:89] found id: "4952e333e5cbca2ab975c1b717b23754934a25101ec680e6df940a3abe4aa3e3"
	I1123 08:02:01.788942 1053629 cri.go:89] found id: "1e9a39b963c81a6ff6ba191d66d478a513599130671d0996e8d442248af5eee3"
	I1123 08:02:01.788945 1053629 cri.go:89] found id: ""
	I1123 08:02:01.788999 1053629 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:02:01.812498 1053629 out.go:203] 
	W1123 08:02:01.815545 1053629 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:02:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:02:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:02:01.815568 1053629 out.go:285] * 
	* 
	W1123 08:02:01.823921 1053629 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:02:01.827046 1053629 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-782760 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.21s)
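Note: every "addons disable" call in this report fails the same way. minikube's paused-cluster check shells into the node, lists kube-system containers with crictl, then runs "sudo runc list -f json", which errors with "open /run/runc: no such file or directory" on this crio node, so the command exits 11 (MK_ADDON_DISABLE_PAUSED). A minimal sketch to reproduce the check by hand, assuming the profile name addons-782760 from this run:

	# list kube-system containers the same way the disable path does
	minikube -p addons-782760 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the paused check then queries runc directly; with crio, /run/runc is absent, so this exits 1
	minikube -p addons-782760 ssh -- sudo runc list -f json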

                                                
                                    
TestAddons/parallel/InspektorGadget (6.29s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-pqgzc" [28ce4592-befe-4207-955a-a4af62277a06] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003487819s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-782760 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (283.219898ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 07:59:35.386205 1051374 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:59:35.392757 1051374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:59:35.392815 1051374 out.go:374] Setting ErrFile to fd 2...
	I1123 07:59:35.392838 1051374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:59:35.393150 1051374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 07:59:35.393510 1051374 mustload.go:66] Loading cluster: addons-782760
	I1123 07:59:35.393980 1051374 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:59:35.394028 1051374 addons.go:622] checking whether the cluster is paused
	I1123 07:59:35.394164 1051374 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:59:35.394201 1051374 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:59:35.394744 1051374 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:59:35.425433 1051374 ssh_runner.go:195] Run: systemctl --version
	I1123 07:59:35.425483 1051374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:59:35.447642 1051374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:59:35.553930 1051374 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:59:35.554014 1051374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:59:35.584724 1051374 cri.go:89] found id: "b7dbc42af3eaa55b87cc8920859061e757eb023e86e81249f165e03ab50e4242"
	I1123 07:59:35.584743 1051374 cri.go:89] found id: "e81b53e67dd69b5c11fd7296687e0873840c35bd3d9a0a362120bddf439d6c1b"
	I1123 07:59:35.584748 1051374 cri.go:89] found id: "25c0aa23665db233b369dab0d5441e57c0ce88fa6616d8cf7e6b835782338180"
	I1123 07:59:35.584751 1051374 cri.go:89] found id: "654a0f71268c2242c663c96bcf3824362a6b59fde36427f2178d5a6a7a40d822"
	I1123 07:59:35.584754 1051374 cri.go:89] found id: "05fe963f89f66768688e74774e00621a5f6cfcdb1fb13cf5f9f72be082d11a49"
	I1123 07:59:35.584758 1051374 cri.go:89] found id: "ff09ce175fe75259d6414ddd02e5948745625c2bbb202a6de931ef6f7a3dd631"
	I1123 07:59:35.584761 1051374 cri.go:89] found id: "7ca479867b2432892b7d17c86aa12ad6fee7b14dfa3af5e913666586727c22e5"
	I1123 07:59:35.584764 1051374 cri.go:89] found id: "35dd0f9bcb50a0d13664543c1e5ff8dac184175da2e417035c9bf88b4c70055c"
	I1123 07:59:35.584767 1051374 cri.go:89] found id: "90e12086b17a955a96fa28343672584a5d4f7e85965306622f66ff5c2f64668b"
	I1123 07:59:35.584773 1051374 cri.go:89] found id: "410c2359fb0c01d8f73a1fd70b1094ae44de6046b129327df1bd83c0d6337ebb"
	I1123 07:59:35.584777 1051374 cri.go:89] found id: "9311aa036bd97e236f7744a9e5ffd3e67d26ec0f771860cd871daaf5ef151735"
	I1123 07:59:35.584780 1051374 cri.go:89] found id: "1d4e31902581e865cf2387b39a5a9142c169c6e1eadf244cde62a11fb2d3bc71"
	I1123 07:59:35.584783 1051374 cri.go:89] found id: "9734ce796f3ef40aea74fe5b37f2070ba72c41a196839cde80dd0861b1465993"
	I1123 07:59:35.584786 1051374 cri.go:89] found id: "fb98b04224a9c4438cfa50aabef9ca321dde423db6b9e11c6ac1ef33927bce15"
	I1123 07:59:35.584789 1051374 cri.go:89] found id: "d2ffd09041ccf70f835af84256922f049edff6ce0aa5b926e7859efc43046a15"
	I1123 07:59:35.584797 1051374 cri.go:89] found id: "685798fa38932c34ea5b41c1b40649d3026a53a13752ea5bc0703dc6086e5d47"
	I1123 07:59:35.584800 1051374 cri.go:89] found id: "01a96c05c2e23fce327adec63f507ecc75154c56dc51b79294c0ada40f73d486"
	I1123 07:59:35.584805 1051374 cri.go:89] found id: "995c0ad221a0ea807ac716f43224f6603841c0abb322b78cd157d03df1535c45"
	I1123 07:59:35.584808 1051374 cri.go:89] found id: "d3d5fbc406391cea6bd05d6bf3e77708af72d668d9cf1f8f67553646b8ebd263"
	I1123 07:59:35.584811 1051374 cri.go:89] found id: "03fd92afca30f9b387a50e40f209a51d44d2219bf6337bbe9b4396831fce9ad8"
	I1123 07:59:35.584816 1051374 cri.go:89] found id: "7b54407c8a503487b0c75dba534bb8d12c3f658348cad08eeee8783e2002685a"
	I1123 07:59:35.584820 1051374 cri.go:89] found id: "4952e333e5cbca2ab975c1b717b23754934a25101ec680e6df940a3abe4aa3e3"
	I1123 07:59:35.584822 1051374 cri.go:89] found id: "1e9a39b963c81a6ff6ba191d66d478a513599130671d0996e8d442248af5eee3"
	I1123 07:59:35.584825 1051374 cri.go:89] found id: ""
	I1123 07:59:35.584875 1051374 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:59:35.599236 1051374 out.go:203] 
	W1123 07:59:35.601901 1051374 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:59:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:59:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:59:35.601925 1051374 out.go:285] * 
	* 
	W1123 07:59:35.610217 1051374 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:59:35.613267 1051374 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-782760 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.29s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.4s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.26373ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-l4cfr" [784e1f40-e163-423b-b2c4-7f3e9306070b] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004017705s
addons_test.go:463: (dbg) Run:  kubectl --context addons-782760 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-782760 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (283.765967ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 07:59:29.112822 1051282 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:59:29.113789 1051282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:59:29.113811 1051282 out.go:374] Setting ErrFile to fd 2...
	I1123 07:59:29.113818 1051282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:59:29.114097 1051282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 07:59:29.114442 1051282 mustload.go:66] Loading cluster: addons-782760
	I1123 07:59:29.114831 1051282 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:59:29.114851 1051282 addons.go:622] checking whether the cluster is paused
	I1123 07:59:29.114965 1051282 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:59:29.114981 1051282 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:59:29.115534 1051282 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:59:29.138078 1051282 ssh_runner.go:195] Run: systemctl --version
	I1123 07:59:29.138137 1051282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:59:29.157223 1051282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:59:29.261500 1051282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:59:29.261589 1051282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:59:29.295235 1051282 cri.go:89] found id: "b7dbc42af3eaa55b87cc8920859061e757eb023e86e81249f165e03ab50e4242"
	I1123 07:59:29.295258 1051282 cri.go:89] found id: "e81b53e67dd69b5c11fd7296687e0873840c35bd3d9a0a362120bddf439d6c1b"
	I1123 07:59:29.295264 1051282 cri.go:89] found id: "25c0aa23665db233b369dab0d5441e57c0ce88fa6616d8cf7e6b835782338180"
	I1123 07:59:29.295268 1051282 cri.go:89] found id: "654a0f71268c2242c663c96bcf3824362a6b59fde36427f2178d5a6a7a40d822"
	I1123 07:59:29.295272 1051282 cri.go:89] found id: "05fe963f89f66768688e74774e00621a5f6cfcdb1fb13cf5f9f72be082d11a49"
	I1123 07:59:29.295275 1051282 cri.go:89] found id: "ff09ce175fe75259d6414ddd02e5948745625c2bbb202a6de931ef6f7a3dd631"
	I1123 07:59:29.295278 1051282 cri.go:89] found id: "7ca479867b2432892b7d17c86aa12ad6fee7b14dfa3af5e913666586727c22e5"
	I1123 07:59:29.295281 1051282 cri.go:89] found id: "35dd0f9bcb50a0d13664543c1e5ff8dac184175da2e417035c9bf88b4c70055c"
	I1123 07:59:29.295284 1051282 cri.go:89] found id: "90e12086b17a955a96fa28343672584a5d4f7e85965306622f66ff5c2f64668b"
	I1123 07:59:29.295290 1051282 cri.go:89] found id: "410c2359fb0c01d8f73a1fd70b1094ae44de6046b129327df1bd83c0d6337ebb"
	I1123 07:59:29.295293 1051282 cri.go:89] found id: "9311aa036bd97e236f7744a9e5ffd3e67d26ec0f771860cd871daaf5ef151735"
	I1123 07:59:29.295296 1051282 cri.go:89] found id: "1d4e31902581e865cf2387b39a5a9142c169c6e1eadf244cde62a11fb2d3bc71"
	I1123 07:59:29.295299 1051282 cri.go:89] found id: "9734ce796f3ef40aea74fe5b37f2070ba72c41a196839cde80dd0861b1465993"
	I1123 07:59:29.295302 1051282 cri.go:89] found id: "fb98b04224a9c4438cfa50aabef9ca321dde423db6b9e11c6ac1ef33927bce15"
	I1123 07:59:29.295306 1051282 cri.go:89] found id: "d2ffd09041ccf70f835af84256922f049edff6ce0aa5b926e7859efc43046a15"
	I1123 07:59:29.295313 1051282 cri.go:89] found id: "685798fa38932c34ea5b41c1b40649d3026a53a13752ea5bc0703dc6086e5d47"
	I1123 07:59:29.295320 1051282 cri.go:89] found id: "01a96c05c2e23fce327adec63f507ecc75154c56dc51b79294c0ada40f73d486"
	I1123 07:59:29.295325 1051282 cri.go:89] found id: "995c0ad221a0ea807ac716f43224f6603841c0abb322b78cd157d03df1535c45"
	I1123 07:59:29.295328 1051282 cri.go:89] found id: "d3d5fbc406391cea6bd05d6bf3e77708af72d668d9cf1f8f67553646b8ebd263"
	I1123 07:59:29.295331 1051282 cri.go:89] found id: "03fd92afca30f9b387a50e40f209a51d44d2219bf6337bbe9b4396831fce9ad8"
	I1123 07:59:29.295335 1051282 cri.go:89] found id: "7b54407c8a503487b0c75dba534bb8d12c3f658348cad08eeee8783e2002685a"
	I1123 07:59:29.295342 1051282 cri.go:89] found id: "4952e333e5cbca2ab975c1b717b23754934a25101ec680e6df940a3abe4aa3e3"
	I1123 07:59:29.295345 1051282 cri.go:89] found id: "1e9a39b963c81a6ff6ba191d66d478a513599130671d0996e8d442248af5eee3"
	I1123 07:59:29.295348 1051282 cri.go:89] found id: ""
	I1123 07:59:29.295401 1051282 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:59:29.310141 1051282 out.go:203] 
	W1123 07:59:29.313077 1051282 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:59:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:59:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:59:29.313102 1051282 out.go:285] * 
	* 
	W1123 07:59:29.321083 1051282 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:59:29.324184 1051282 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-782760 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.40s)

                                                
                                    
TestAddons/parallel/CSI (56.31s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1123 07:59:21.109108 1043159 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1123 07:59:21.112893 1043159 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1123 07:59:21.112919 1043159 kapi.go:107] duration metric: took 3.822562ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.832654ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-782760 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-782760 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [96cfe33f-05bb-4c9a-99f1-d51a70a03e7f] Pending
helpers_test.go:352: "task-pv-pod" [96cfe33f-05bb-4c9a-99f1-d51a70a03e7f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [96cfe33f-05bb-4c9a-99f1-d51a70a03e7f] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003568851s
addons_test.go:572: (dbg) Run:  kubectl --context addons-782760 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-782760 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-782760 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-782760 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-782760 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-782760 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-782760 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [aa65de24-c269-4995-bfe7-e991fef8da93] Pending
helpers_test.go:352: "task-pv-pod-restore" [aa65de24-c269-4995-bfe7-e991fef8da93] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [aa65de24-c269-4995-bfe7-e991fef8da93] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004124674s
addons_test.go:614: (dbg) Run:  kubectl --context addons-782760 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-782760 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-782760 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-782760 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (270.094481ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:00:16.928419 1052356 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:00:16.929282 1052356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:00:16.929320 1052356 out.go:374] Setting ErrFile to fd 2...
	I1123 08:00:16.929342 1052356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:00:16.929623 1052356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:00:16.929945 1052356 mustload.go:66] Loading cluster: addons-782760
	I1123 08:00:16.930382 1052356 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:00:16.930421 1052356 addons.go:622] checking whether the cluster is paused
	I1123 08:00:16.930566 1052356 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:00:16.930597 1052356 host.go:66] Checking if "addons-782760" exists ...
	I1123 08:00:16.931146 1052356 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 08:00:16.949531 1052356 ssh_runner.go:195] Run: systemctl --version
	I1123 08:00:16.949579 1052356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 08:00:16.975681 1052356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 08:00:17.085745 1052356 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:00:17.085831 1052356 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:00:17.115420 1052356 cri.go:89] found id: "b7dbc42af3eaa55b87cc8920859061e757eb023e86e81249f165e03ab50e4242"
	I1123 08:00:17.115439 1052356 cri.go:89] found id: "e81b53e67dd69b5c11fd7296687e0873840c35bd3d9a0a362120bddf439d6c1b"
	I1123 08:00:17.115443 1052356 cri.go:89] found id: "25c0aa23665db233b369dab0d5441e57c0ce88fa6616d8cf7e6b835782338180"
	I1123 08:00:17.115447 1052356 cri.go:89] found id: "654a0f71268c2242c663c96bcf3824362a6b59fde36427f2178d5a6a7a40d822"
	I1123 08:00:17.115450 1052356 cri.go:89] found id: "05fe963f89f66768688e74774e00621a5f6cfcdb1fb13cf5f9f72be082d11a49"
	I1123 08:00:17.115453 1052356 cri.go:89] found id: "ff09ce175fe75259d6414ddd02e5948745625c2bbb202a6de931ef6f7a3dd631"
	I1123 08:00:17.115456 1052356 cri.go:89] found id: "7ca479867b2432892b7d17c86aa12ad6fee7b14dfa3af5e913666586727c22e5"
	I1123 08:00:17.115459 1052356 cri.go:89] found id: "35dd0f9bcb50a0d13664543c1e5ff8dac184175da2e417035c9bf88b4c70055c"
	I1123 08:00:17.115462 1052356 cri.go:89] found id: "90e12086b17a955a96fa28343672584a5d4f7e85965306622f66ff5c2f64668b"
	I1123 08:00:17.115469 1052356 cri.go:89] found id: "410c2359fb0c01d8f73a1fd70b1094ae44de6046b129327df1bd83c0d6337ebb"
	I1123 08:00:17.115472 1052356 cri.go:89] found id: "9311aa036bd97e236f7744a9e5ffd3e67d26ec0f771860cd871daaf5ef151735"
	I1123 08:00:17.115475 1052356 cri.go:89] found id: "1d4e31902581e865cf2387b39a5a9142c169c6e1eadf244cde62a11fb2d3bc71"
	I1123 08:00:17.115478 1052356 cri.go:89] found id: "9734ce796f3ef40aea74fe5b37f2070ba72c41a196839cde80dd0861b1465993"
	I1123 08:00:17.115480 1052356 cri.go:89] found id: "fb98b04224a9c4438cfa50aabef9ca321dde423db6b9e11c6ac1ef33927bce15"
	I1123 08:00:17.115483 1052356 cri.go:89] found id: "d2ffd09041ccf70f835af84256922f049edff6ce0aa5b926e7859efc43046a15"
	I1123 08:00:17.115488 1052356 cri.go:89] found id: "685798fa38932c34ea5b41c1b40649d3026a53a13752ea5bc0703dc6086e5d47"
	I1123 08:00:17.115491 1052356 cri.go:89] found id: "01a96c05c2e23fce327adec63f507ecc75154c56dc51b79294c0ada40f73d486"
	I1123 08:00:17.115494 1052356 cri.go:89] found id: "995c0ad221a0ea807ac716f43224f6603841c0abb322b78cd157d03df1535c45"
	I1123 08:00:17.115497 1052356 cri.go:89] found id: "d3d5fbc406391cea6bd05d6bf3e77708af72d668d9cf1f8f67553646b8ebd263"
	I1123 08:00:17.115500 1052356 cri.go:89] found id: "03fd92afca30f9b387a50e40f209a51d44d2219bf6337bbe9b4396831fce9ad8"
	I1123 08:00:17.115504 1052356 cri.go:89] found id: "7b54407c8a503487b0c75dba534bb8d12c3f658348cad08eeee8783e2002685a"
	I1123 08:00:17.115507 1052356 cri.go:89] found id: "4952e333e5cbca2ab975c1b717b23754934a25101ec680e6df940a3abe4aa3e3"
	I1123 08:00:17.115510 1052356 cri.go:89] found id: "1e9a39b963c81a6ff6ba191d66d478a513599130671d0996e8d442248af5eee3"
	I1123 08:00:17.115513 1052356 cri.go:89] found id: ""
	I1123 08:00:17.115563 1052356 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:00:17.130518 1052356 out.go:203] 
	W1123 08:00:17.133364 1052356 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:00:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:00:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:00:17.133388 1052356 out.go:285] * 
	* 
	W1123 08:00:17.141464 1052356 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:00:17.144535 1052356 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-782760 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-782760 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (264.557408ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:00:17.204572 1052400 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:00:17.205826 1052400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:00:17.205843 1052400 out.go:374] Setting ErrFile to fd 2...
	I1123 08:00:17.205849 1052400 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:00:17.206126 1052400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:00:17.206431 1052400 mustload.go:66] Loading cluster: addons-782760
	I1123 08:00:17.206856 1052400 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:00:17.206873 1052400 addons.go:622] checking whether the cluster is paused
	I1123 08:00:17.206981 1052400 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:00:17.206996 1052400 host.go:66] Checking if "addons-782760" exists ...
	I1123 08:00:17.207533 1052400 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 08:00:17.223663 1052400 ssh_runner.go:195] Run: systemctl --version
	I1123 08:00:17.223732 1052400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 08:00:17.241596 1052400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 08:00:17.350053 1052400 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:00:17.350150 1052400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:00:17.380586 1052400 cri.go:89] found id: "b7dbc42af3eaa55b87cc8920859061e757eb023e86e81249f165e03ab50e4242"
	I1123 08:00:17.380620 1052400 cri.go:89] found id: "e81b53e67dd69b5c11fd7296687e0873840c35bd3d9a0a362120bddf439d6c1b"
	I1123 08:00:17.380626 1052400 cri.go:89] found id: "25c0aa23665db233b369dab0d5441e57c0ce88fa6616d8cf7e6b835782338180"
	I1123 08:00:17.380634 1052400 cri.go:89] found id: "654a0f71268c2242c663c96bcf3824362a6b59fde36427f2178d5a6a7a40d822"
	I1123 08:00:17.380638 1052400 cri.go:89] found id: "05fe963f89f66768688e74774e00621a5f6cfcdb1fb13cf5f9f72be082d11a49"
	I1123 08:00:17.380641 1052400 cri.go:89] found id: "ff09ce175fe75259d6414ddd02e5948745625c2bbb202a6de931ef6f7a3dd631"
	I1123 08:00:17.380645 1052400 cri.go:89] found id: "7ca479867b2432892b7d17c86aa12ad6fee7b14dfa3af5e913666586727c22e5"
	I1123 08:00:17.380648 1052400 cri.go:89] found id: "35dd0f9bcb50a0d13664543c1e5ff8dac184175da2e417035c9bf88b4c70055c"
	I1123 08:00:17.380652 1052400 cri.go:89] found id: "90e12086b17a955a96fa28343672584a5d4f7e85965306622f66ff5c2f64668b"
	I1123 08:00:17.380659 1052400 cri.go:89] found id: "410c2359fb0c01d8f73a1fd70b1094ae44de6046b129327df1bd83c0d6337ebb"
	I1123 08:00:17.380673 1052400 cri.go:89] found id: "9311aa036bd97e236f7744a9e5ffd3e67d26ec0f771860cd871daaf5ef151735"
	I1123 08:00:17.380678 1052400 cri.go:89] found id: "1d4e31902581e865cf2387b39a5a9142c169c6e1eadf244cde62a11fb2d3bc71"
	I1123 08:00:17.380689 1052400 cri.go:89] found id: "9734ce796f3ef40aea74fe5b37f2070ba72c41a196839cde80dd0861b1465993"
	I1123 08:00:17.380696 1052400 cri.go:89] found id: "fb98b04224a9c4438cfa50aabef9ca321dde423db6b9e11c6ac1ef33927bce15"
	I1123 08:00:17.380699 1052400 cri.go:89] found id: "d2ffd09041ccf70f835af84256922f049edff6ce0aa5b926e7859efc43046a15"
	I1123 08:00:17.380705 1052400 cri.go:89] found id: "685798fa38932c34ea5b41c1b40649d3026a53a13752ea5bc0703dc6086e5d47"
	I1123 08:00:17.380708 1052400 cri.go:89] found id: "01a96c05c2e23fce327adec63f507ecc75154c56dc51b79294c0ada40f73d486"
	I1123 08:00:17.380711 1052400 cri.go:89] found id: "995c0ad221a0ea807ac716f43224f6603841c0abb322b78cd157d03df1535c45"
	I1123 08:00:17.380714 1052400 cri.go:89] found id: "d3d5fbc406391cea6bd05d6bf3e77708af72d668d9cf1f8f67553646b8ebd263"
	I1123 08:00:17.380717 1052400 cri.go:89] found id: "03fd92afca30f9b387a50e40f209a51d44d2219bf6337bbe9b4396831fce9ad8"
	I1123 08:00:17.380722 1052400 cri.go:89] found id: "7b54407c8a503487b0c75dba534bb8d12c3f658348cad08eeee8783e2002685a"
	I1123 08:00:17.380725 1052400 cri.go:89] found id: "4952e333e5cbca2ab975c1b717b23754934a25101ec680e6df940a3abe4aa3e3"
	I1123 08:00:17.380728 1052400 cri.go:89] found id: "1e9a39b963c81a6ff6ba191d66d478a513599130671d0996e8d442248af5eee3"
	I1123 08:00:17.380732 1052400 cri.go:89] found id: ""
	I1123 08:00:17.380790 1052400 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:00:17.395538 1052400 out.go:203] 
	W1123 08:00:17.398380 1052400 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:00:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:00:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:00:17.398405 1052400 out.go:285] * 
	* 
	W1123 08:00:17.406575 1052400 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:00:17.409696 1052400 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-782760 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (56.31s)
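Note on the failure above: the exit happens in the paused-state check, not in the CSI addon itself. Before touching an addon, minikube lists runc containers with "sudo runc list -f json", and on this crio node that fails because /run/runc does not exist, while crictl can still enumerate the kube-system containers (see the cri.go:89 lines above). A minimal manual reproduction, assuming a shell inside the node obtained via "out/minikube-linux-arm64 -p addons-782760 ssh"; these commands are not part of the test run:

	sudo runc list -f json
	# fails: open /run/runc: no such file or directory
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# still prints the container IDs reported by cri.go:89 above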

TestAddons/parallel/Headlamp (3.41s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-782760 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-782760 --alsologtostderr -v=1: exit status 11 (302.301356ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 07:59:20.588346 1050622 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:59:20.592388 1050622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:59:20.592457 1050622 out.go:374] Setting ErrFile to fd 2...
	I1123 07:59:20.592479 1050622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:59:20.592782 1050622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 07:59:20.593125 1050622 mustload.go:66] Loading cluster: addons-782760
	I1123 07:59:20.593568 1050622 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:59:20.593611 1050622 addons.go:622] checking whether the cluster is paused
	I1123 07:59:20.593754 1050622 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:59:20.593788 1050622 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:59:20.594362 1050622 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:59:20.614010 1050622 ssh_runner.go:195] Run: systemctl --version
	I1123 07:59:20.614061 1050622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:59:20.632761 1050622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:59:20.743148 1050622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:59:20.743329 1050622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:59:20.779364 1050622 cri.go:89] found id: "b7dbc42af3eaa55b87cc8920859061e757eb023e86e81249f165e03ab50e4242"
	I1123 07:59:20.779688 1050622 cri.go:89] found id: "e81b53e67dd69b5c11fd7296687e0873840c35bd3d9a0a362120bddf439d6c1b"
	I1123 07:59:20.779715 1050622 cri.go:89] found id: "25c0aa23665db233b369dab0d5441e57c0ce88fa6616d8cf7e6b835782338180"
	I1123 07:59:20.779721 1050622 cri.go:89] found id: "654a0f71268c2242c663c96bcf3824362a6b59fde36427f2178d5a6a7a40d822"
	I1123 07:59:20.779725 1050622 cri.go:89] found id: "05fe963f89f66768688e74774e00621a5f6cfcdb1fb13cf5f9f72be082d11a49"
	I1123 07:59:20.779729 1050622 cri.go:89] found id: "ff09ce175fe75259d6414ddd02e5948745625c2bbb202a6de931ef6f7a3dd631"
	I1123 07:59:20.779732 1050622 cri.go:89] found id: "7ca479867b2432892b7d17c86aa12ad6fee7b14dfa3af5e913666586727c22e5"
	I1123 07:59:20.779736 1050622 cri.go:89] found id: "35dd0f9bcb50a0d13664543c1e5ff8dac184175da2e417035c9bf88b4c70055c"
	I1123 07:59:20.779739 1050622 cri.go:89] found id: "90e12086b17a955a96fa28343672584a5d4f7e85965306622f66ff5c2f64668b"
	I1123 07:59:20.779745 1050622 cri.go:89] found id: "410c2359fb0c01d8f73a1fd70b1094ae44de6046b129327df1bd83c0d6337ebb"
	I1123 07:59:20.779749 1050622 cri.go:89] found id: "9311aa036bd97e236f7744a9e5ffd3e67d26ec0f771860cd871daaf5ef151735"
	I1123 07:59:20.779752 1050622 cri.go:89] found id: "1d4e31902581e865cf2387b39a5a9142c169c6e1eadf244cde62a11fb2d3bc71"
	I1123 07:59:20.779756 1050622 cri.go:89] found id: "9734ce796f3ef40aea74fe5b37f2070ba72c41a196839cde80dd0861b1465993"
	I1123 07:59:20.779759 1050622 cri.go:89] found id: "fb98b04224a9c4438cfa50aabef9ca321dde423db6b9e11c6ac1ef33927bce15"
	I1123 07:59:20.779763 1050622 cri.go:89] found id: "d2ffd09041ccf70f835af84256922f049edff6ce0aa5b926e7859efc43046a15"
	I1123 07:59:20.779768 1050622 cri.go:89] found id: "685798fa38932c34ea5b41c1b40649d3026a53a13752ea5bc0703dc6086e5d47"
	I1123 07:59:20.779775 1050622 cri.go:89] found id: "01a96c05c2e23fce327adec63f507ecc75154c56dc51b79294c0ada40f73d486"
	I1123 07:59:20.779779 1050622 cri.go:89] found id: "995c0ad221a0ea807ac716f43224f6603841c0abb322b78cd157d03df1535c45"
	I1123 07:59:20.779782 1050622 cri.go:89] found id: "d3d5fbc406391cea6bd05d6bf3e77708af72d668d9cf1f8f67553646b8ebd263"
	I1123 07:59:20.779785 1050622 cri.go:89] found id: "03fd92afca30f9b387a50e40f209a51d44d2219bf6337bbe9b4396831fce9ad8"
	I1123 07:59:20.779794 1050622 cri.go:89] found id: "7b54407c8a503487b0c75dba534bb8d12c3f658348cad08eeee8783e2002685a"
	I1123 07:59:20.779800 1050622 cri.go:89] found id: "4952e333e5cbca2ab975c1b717b23754934a25101ec680e6df940a3abe4aa3e3"
	I1123 07:59:20.779803 1050622 cri.go:89] found id: "1e9a39b963c81a6ff6ba191d66d478a513599130671d0996e8d442248af5eee3"
	I1123 07:59:20.779806 1050622 cri.go:89] found id: ""
	I1123 07:59:20.779858 1050622 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:59:20.802796 1050622 out.go:203] 
	W1123 07:59:20.805798 1050622 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:59:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:59:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:59:20.805828 1050622 out.go:285] * 
	* 
	W1123 07:59:20.813992 1050622 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:59:20.817231 1050622 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-782760 --alsologtostderr -v=1": exit status 11
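As with the CSI failure, the enable command exits during the paused-state check, before any headlamp manifests are applied. A hedged manual follow-up (not run by the harness) to confirm the addon was left disabled:

	out/minikube-linux-arm64 -p addons-782760 addons list
	# headlamp is expected to still show as disabled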
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-782760
helpers_test.go:243: (dbg) docker inspect addons-782760:

-- stdout --
	[
	    {
	        "Id": "3e0fb2f2cb2c2ca7bc7b036b5b90817ca7c6955044febd5450a96db807d17185",
	        "Created": "2025-11-23T07:56:27.962209564Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1044325,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T07:56:28.049584421Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/3e0fb2f2cb2c2ca7bc7b036b5b90817ca7c6955044febd5450a96db807d17185/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3e0fb2f2cb2c2ca7bc7b036b5b90817ca7c6955044febd5450a96db807d17185/hostname",
	        "HostsPath": "/var/lib/docker/containers/3e0fb2f2cb2c2ca7bc7b036b5b90817ca7c6955044febd5450a96db807d17185/hosts",
	        "LogPath": "/var/lib/docker/containers/3e0fb2f2cb2c2ca7bc7b036b5b90817ca7c6955044febd5450a96db807d17185/3e0fb2f2cb2c2ca7bc7b036b5b90817ca7c6955044febd5450a96db807d17185-json.log",
	        "Name": "/addons-782760",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-782760:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-782760",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3e0fb2f2cb2c2ca7bc7b036b5b90817ca7c6955044febd5450a96db807d17185",
	                "LowerDir": "/var/lib/docker/overlay2/1179f3f67fd1d0ccdebabebf16620c73061bc6ae405115f7f375f734b6a4e83d-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1179f3f67fd1d0ccdebabebf16620c73061bc6ae405115f7f375f734b6a4e83d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1179f3f67fd1d0ccdebabebf16620c73061bc6ae405115f7f375f734b6a4e83d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1179f3f67fd1d0ccdebabebf16620c73061bc6ae405115f7f375f734b6a4e83d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-782760",
	                "Source": "/var/lib/docker/volumes/addons-782760/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-782760",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-782760",
	                "name.minikube.sigs.k8s.io": "addons-782760",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c9493e00d837ec3222fe395926668105071dfbc85dde8c905b0a3cbd0e3b56b8",
	            "SandboxKey": "/var/run/docker/netns/c9493e00d837",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34227"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34228"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34231"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34229"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34230"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-782760": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:34:3e:80:d9:15",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "65d6399dab490bbc161a2856eb90bdfcc5a05536af204af8a801042873393672",
	                    "EndpointID": "fb457c8d45285dd9dc8de0ea58cbf0d751c663419b15d108372decc912d3c13b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-782760",
	                        "3e0fb2f2cb2c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
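The 22/tcp binding in the inspect output above (127.0.0.1:34227) is the same HostPort the sshutil.go line connected to earlier. Outside the harness, the same mapping could be read directly with docker port; this is a manual check, not something the test executes:

	docker port addons-782760 22/tcp
	# 127.0.0.1:34227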
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-782760 -n addons-782760
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-782760 logs -n 25: (1.520758662s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-833751 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-833751   │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:55 UTC │
	│ delete  │ -p download-only-833751                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-833751   │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:55 UTC │
	│ start   │ -o=json --download-only -p download-only-540328 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-540328   │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 23 Nov 25 07:56 UTC │ 23 Nov 25 07:56 UTC │
	│ delete  │ -p download-only-540328                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-540328   │ jenkins │ v1.37.0 │ 23 Nov 25 07:56 UTC │ 23 Nov 25 07:56 UTC │
	│ delete  │ -p download-only-833751                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-833751   │ jenkins │ v1.37.0 │ 23 Nov 25 07:56 UTC │ 23 Nov 25 07:56 UTC │
	│ delete  │ -p download-only-540328                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-540328   │ jenkins │ v1.37.0 │ 23 Nov 25 07:56 UTC │ 23 Nov 25 07:56 UTC │
	│ start   │ --download-only -p download-docker-178439 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-178439 │ jenkins │ v1.37.0 │ 23 Nov 25 07:56 UTC │                     │
	│ delete  │ -p download-docker-178439                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-178439 │ jenkins │ v1.37.0 │ 23 Nov 25 07:56 UTC │ 23 Nov 25 07:56 UTC │
	│ start   │ --download-only -p binary-mirror-804601 --alsologtostderr --binary-mirror http://127.0.0.1:32857 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-804601   │ jenkins │ v1.37.0 │ 23 Nov 25 07:56 UTC │                     │
	│ delete  │ -p binary-mirror-804601                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-804601   │ jenkins │ v1.37.0 │ 23 Nov 25 07:56 UTC │ 23 Nov 25 07:56 UTC │
	│ addons  │ enable dashboard -p addons-782760                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-782760                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:56 UTC │                     │
	│ start   │ -p addons-782760 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:56 UTC │ 23 Nov 25 07:58 UTC │
	│ addons  │ addons-782760 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:58 UTC │                     │
	│ addons  │ addons-782760 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │                     │
	│ addons  │ addons-782760 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │                     │
	│ addons  │ addons-782760 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │                     │
	│ ip      │ addons-782760 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │ 23 Nov 25 07:59 UTC │
	│ addons  │ addons-782760 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │                     │
	│ ssh     │ addons-782760 ssh cat /opt/local-path-provisioner/pvc-4edddf59-348f-4660-91bb-3a71fe1ac723_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │ 23 Nov 25 07:59 UTC │
	│ addons  │ addons-782760 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │                     │
	│ addons  │ enable headlamp -p addons-782760 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │                     │
	│ addons  │ addons-782760 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-782760          │ jenkins │ v1.37.0 │ 23 Nov 25 07:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 07:56:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 07:56:03.114980 1043921 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:56:03.115533 1043921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:56:03.115554 1043921 out.go:374] Setting ErrFile to fd 2...
	I1123 07:56:03.115561 1043921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:56:03.115993 1043921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 07:56:03.116675 1043921 out.go:368] Setting JSON to false
	I1123 07:56:03.117679 1043921 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":31108,"bootTime":1763853455,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 07:56:03.117794 1043921 start.go:143] virtualization:  
	I1123 07:56:03.121126 1043921 out.go:179] * [addons-782760] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 07:56:03.124982 1043921 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 07:56:03.125070 1043921 notify.go:221] Checking for updates...
	I1123 07:56:03.130967 1043921 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 07:56:03.133919 1043921 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 07:56:03.136905 1043921 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 07:56:03.139934 1043921 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 07:56:03.142927 1043921 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 07:56:03.146074 1043921 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 07:56:03.176755 1043921 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 07:56:03.176874 1043921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:56:03.229560 1043921 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-23 07:56:03.220292818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 07:56:03.229700 1043921 docker.go:319] overlay module found
	I1123 07:56:03.232935 1043921 out.go:179] * Using the docker driver based on user configuration
	I1123 07:56:03.235763 1043921 start.go:309] selected driver: docker
	I1123 07:56:03.235785 1043921 start.go:927] validating driver "docker" against <nil>
	I1123 07:56:03.235799 1043921 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 07:56:03.236606 1043921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:56:03.292154 1043921 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-23 07:56:03.283293618 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 07:56:03.292306 1043921 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 07:56:03.292533 1043921 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 07:56:03.295391 1043921 out.go:179] * Using Docker driver with root privileges
	I1123 07:56:03.298217 1043921 cni.go:84] Creating CNI manager for ""
	I1123 07:56:03.298301 1043921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 07:56:03.298316 1043921 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 07:56:03.298405 1043921 start.go:353] cluster config:
	{Name:addons-782760 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-782760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1123 07:56:03.301465 1043921 out.go:179] * Starting "addons-782760" primary control-plane node in "addons-782760" cluster
	I1123 07:56:03.304249 1043921 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 07:56:03.307256 1043921 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 07:56:03.310111 1043921 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 07:56:03.310161 1043921 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 07:56:03.310174 1043921 cache.go:65] Caching tarball of preloaded images
	I1123 07:56:03.310191 1043921 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 07:56:03.310271 1043921 preload.go:238] Found /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 07:56:03.310281 1043921 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 07:56:03.310628 1043921 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/config.json ...
	I1123 07:56:03.310648 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/config.json: {Name:mkba92e87d8837cd4e3d5581be5a67ad0a2c349b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:03.326180 1043921 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 07:56:03.326317 1043921 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 07:56:03.326339 1043921 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 07:56:03.326344 1043921 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 07:56:03.326356 1043921 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 07:56:03.326361 1043921 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1123 07:56:20.838645 1043921 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1123 07:56:20.838685 1043921 cache.go:243] Successfully downloaded all kic artifacts
	I1123 07:56:20.838724 1043921 start.go:360] acquireMachinesLock for addons-782760: {Name:mkbe72898b248d290d2a77e20e593673429036d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 07:56:20.838853 1043921 start.go:364] duration metric: took 92.051µs to acquireMachinesLock for "addons-782760"
	I1123 07:56:20.838883 1043921 start.go:93] Provisioning new machine with config: &{Name:addons-782760 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-782760 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 07:56:20.838958 1043921 start.go:125] createHost starting for "" (driver="docker")
	I1123 07:56:20.842374 1043921 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1123 07:56:20.842633 1043921 start.go:159] libmachine.API.Create for "addons-782760" (driver="docker")
	I1123 07:56:20.842674 1043921 client.go:173] LocalClient.Create starting
	I1123 07:56:20.842822 1043921 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem
	I1123 07:56:20.953318 1043921 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem
	I1123 07:56:21.301270 1043921 cli_runner.go:164] Run: docker network inspect addons-782760 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 07:56:21.316967 1043921 cli_runner.go:211] docker network inspect addons-782760 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 07:56:21.317051 1043921 network_create.go:284] running [docker network inspect addons-782760] to gather additional debugging logs...
	I1123 07:56:21.317076 1043921 cli_runner.go:164] Run: docker network inspect addons-782760
	W1123 07:56:21.332834 1043921 cli_runner.go:211] docker network inspect addons-782760 returned with exit code 1
	I1123 07:56:21.332867 1043921 network_create.go:287] error running [docker network inspect addons-782760]: docker network inspect addons-782760: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-782760 not found
	I1123 07:56:21.332882 1043921 network_create.go:289] output of [docker network inspect addons-782760]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-782760 not found
	
	** /stderr **
	I1123 07:56:21.332986 1043921 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 07:56:21.348348 1043921 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b12090}
	I1123 07:56:21.348389 1043921 network_create.go:124] attempt to create docker network addons-782760 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1123 07:56:21.348448 1043921 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-782760 addons-782760
	I1123 07:56:21.417389 1043921 network_create.go:108] docker network addons-782760 192.168.49.0/24 created
	I1123 07:56:21.417422 1043921 kic.go:121] calculated static IP "192.168.49.2" for the "addons-782760" container
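	The subnet chosen above can be verified directly against what Docker actually created; a minimal sketch, with the network name and expected values taken from this log:

	    # Show the subnet and gateway Docker assigned to the minikube network
	    docker network inspect addons-782760 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	    # Expected for this run: 192.168.49.0/24 192.168.49.1,
	    # consistent with the calculated static IP 192.168.49.2 above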
	I1123 07:56:21.417505 1043921 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 07:56:21.435433 1043921 cli_runner.go:164] Run: docker volume create addons-782760 --label name.minikube.sigs.k8s.io=addons-782760 --label created_by.minikube.sigs.k8s.io=true
	I1123 07:56:21.457235 1043921 oci.go:103] Successfully created a docker volume addons-782760
	I1123 07:56:21.457321 1043921 cli_runner.go:164] Run: docker run --rm --name addons-782760-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-782760 --entrypoint /usr/bin/test -v addons-782760:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 07:56:23.506471 1043921 cli_runner.go:217] Completed: docker run --rm --name addons-782760-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-782760 --entrypoint /usr/bin/test -v addons-782760:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (2.049101668s)
	I1123 07:56:23.506516 1043921 oci.go:107] Successfully prepared a docker volume addons-782760
	I1123 07:56:23.506558 1043921 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 07:56:23.506568 1043921 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 07:56:23.506630 1043921 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-782760:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 07:56:27.898734 1043921 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-782760:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.392064261s)
	I1123 07:56:27.898782 1043921 kic.go:203] duration metric: took 4.392210144s to extract preloaded images to volume ...
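	The two docker run invocations above follow a common pattern for populating a named volume: probe it with /usr/bin/test, then stream a preload tarball into it through a throwaway container. A rough stand-alone sketch of that pattern (volume name and tarball path are placeholders; the image is the kicbase build referenced in this log):

	    # Create the volume that will hold the extracted images
	    docker volume create demo-preload
	    # Extract an lz4-compressed tarball into the volume via a disposable container
	    docker run --rm \
	      -v "$PWD/preloaded-images.tar.lz4":/preloaded.tar:ro \
	      -v demo-preload:/extractDir \
	      --entrypoint /usr/bin/tar \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948 \
	      -I lz4 -xf /preloaded.tar -C /extractDir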
	W1123 07:56:27.898915 1043921 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 07:56:27.899023 1043921 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 07:56:27.948254 1043921 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-782760 --name addons-782760 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-782760 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-782760 --network addons-782760 --ip 192.168.49.2 --volume addons-782760:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 07:56:28.253469 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Running}}
	I1123 07:56:28.271338 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:28.293044 1043921 cli_runner.go:164] Run: docker exec addons-782760 stat /var/lib/dpkg/alternatives/iptables
	I1123 07:56:28.347618 1043921 oci.go:144] the created container "addons-782760" has a running status.
	I1123 07:56:28.347652 1043921 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa...
	I1123 07:56:29.017865 1043921 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 07:56:29.036094 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:29.051418 1043921 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 07:56:29.051452 1043921 kic_runner.go:114] Args: [docker exec --privileged addons-782760 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 07:56:29.091781 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:29.107945 1043921 machine.go:94] provisionDockerMachine start ...
	I1123 07:56:29.108039 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:29.124681 1043921 main.go:143] libmachine: Using SSH client type: native
	I1123 07:56:29.125000 1043921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34227 <nil> <nil>}
	I1123 07:56:29.125013 1043921 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 07:56:29.125690 1043921 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 07:56:32.274456 1043921 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-782760
	
	I1123 07:56:32.274481 1043921 ubuntu.go:182] provisioning hostname "addons-782760"
	I1123 07:56:32.274546 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:32.293882 1043921 main.go:143] libmachine: Using SSH client type: native
	I1123 07:56:32.294208 1043921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34227 <nil> <nil>}
	I1123 07:56:32.294224 1043921 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-782760 && echo "addons-782760" | sudo tee /etc/hostname
	I1123 07:56:32.452638 1043921 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-782760
	
	I1123 07:56:32.452723 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:32.469284 1043921 main.go:143] libmachine: Using SSH client type: native
	I1123 07:56:32.469632 1043921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34227 <nil> <nil>}
	I1123 07:56:32.469656 1043921 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-782760' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-782760/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-782760' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 07:56:32.619222 1043921 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 07:56:32.619245 1043921 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 07:56:32.619276 1043921 ubuntu.go:190] setting up certificates
	I1123 07:56:32.619285 1043921 provision.go:84] configureAuth start
	I1123 07:56:32.619344 1043921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-782760
	I1123 07:56:32.636466 1043921 provision.go:143] copyHostCerts
	I1123 07:56:32.636555 1043921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 07:56:32.636691 1043921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 07:56:32.636765 1043921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
	I1123 07:56:32.636830 1043921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.addons-782760 san=[127.0.0.1 192.168.49.2 addons-782760 localhost minikube]
	I1123 07:56:32.710139 1043921 provision.go:177] copyRemoteCerts
	I1123 07:56:32.710204 1043921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 07:56:32.710242 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:32.725949 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:32.830768 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 07:56:32.847702 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 07:56:32.864986 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 07:56:32.881891 1043921 provision.go:87] duration metric: took 262.579104ms to configureAuth
	I1123 07:56:32.881919 1043921 ubuntu.go:206] setting minikube options for container-runtime
	I1123 07:56:32.882144 1043921 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:56:32.882265 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:32.900974 1043921 main.go:143] libmachine: Using SSH client type: native
	I1123 07:56:32.901300 1043921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34227 <nil> <nil>}
	I1123 07:56:32.901319 1043921 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 07:56:33.196247 1043921 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 07:56:33.196285 1043921 machine.go:97] duration metric: took 4.088302801s to provisionDockerMachine
	I1123 07:56:33.196295 1043921 client.go:176] duration metric: took 12.353611625s to LocalClient.Create
	I1123 07:56:33.196318 1043921 start.go:167] duration metric: took 12.353678684s to libmachine.API.Create "addons-782760"
	I1123 07:56:33.196328 1043921 start.go:293] postStartSetup for "addons-782760" (driver="docker")
	I1123 07:56:33.196338 1043921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 07:56:33.196410 1043921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 07:56:33.196468 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:33.213220 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:33.318935 1043921 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 07:56:33.322075 1043921 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 07:56:33.322106 1043921 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 07:56:33.322118 1043921 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/addons for local assets ...
	I1123 07:56:33.322180 1043921 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/files for local assets ...
	I1123 07:56:33.322206 1043921 start.go:296] duration metric: took 125.872398ms for postStartSetup
	I1123 07:56:33.322514 1043921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-782760
	I1123 07:56:33.338303 1043921 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/config.json ...
	I1123 07:56:33.338592 1043921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 07:56:33.338642 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:33.355626 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:33.456033 1043921 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 07:56:33.460611 1043921 start.go:128] duration metric: took 12.621639337s to createHost
	I1123 07:56:33.460678 1043921 start.go:83] releasing machines lock for "addons-782760", held for 12.621810114s
	I1123 07:56:33.460770 1043921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-782760
	I1123 07:56:33.476965 1043921 ssh_runner.go:195] Run: cat /version.json
	I1123 07:56:33.477007 1043921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 07:56:33.477014 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:33.477058 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:33.496736 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:33.498504 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:33.686990 1043921 ssh_runner.go:195] Run: systemctl --version
	I1123 07:56:33.692938 1043921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 07:56:33.726669 1043921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 07:56:33.730740 1043921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 07:56:33.730859 1043921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 07:56:33.757871 1043921 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 07:56:33.757895 1043921 start.go:496] detecting cgroup driver to use...
	I1123 07:56:33.757926 1043921 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 07:56:33.757991 1043921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 07:56:33.773620 1043921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 07:56:33.785867 1043921 docker.go:218] disabling cri-docker service (if available) ...
	I1123 07:56:33.785938 1043921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 07:56:33.804563 1043921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 07:56:33.824032 1043921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 07:56:33.951213 1043921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 07:56:34.084749 1043921 docker.go:234] disabling docker service ...
	I1123 07:56:34.084867 1043921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 07:56:34.106694 1043921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 07:56:34.120039 1043921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 07:56:34.239852 1043921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 07:56:34.349893 1043921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 07:56:34.363689 1043921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 07:56:34.378257 1043921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 07:56:34.378360 1043921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:56:34.393557 1043921 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 07:56:34.393643 1043921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:56:34.402830 1043921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:56:34.411906 1043921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:56:34.420533 1043921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 07:56:34.428687 1043921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:56:34.437290 1043921 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:56:34.450409 1043921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:56:34.459977 1043921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 07:56:34.468697 1043921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 07:56:34.476335 1043921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 07:56:34.581247 1043921 ssh_runner.go:195] Run: sudo systemctl restart crio
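	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl set before this restart. A small verification sketch (the grep pattern is reconstructed from the commands, not a dump of the actual file):

	    # Confirm the keys the sed edits were meant to set
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # Restart CRI-O and confirm it came back up
	    sudo systemctl restart crio && systemctl is-active crio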
	I1123 07:56:34.741518 1043921 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 07:56:34.741647 1043921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 07:56:34.745336 1043921 start.go:564] Will wait 60s for crictl version
	I1123 07:56:34.745443 1043921 ssh_runner.go:195] Run: which crictl
	I1123 07:56:34.749001 1043921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 07:56:34.776328 1043921 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 07:56:34.776484 1043921 ssh_runner.go:195] Run: crio --version
	I1123 07:56:34.804070 1043921 ssh_runner.go:195] Run: crio --version
	I1123 07:56:34.833325 1043921 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 07:56:34.836114 1043921 cli_runner.go:164] Run: docker network inspect addons-782760 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 07:56:34.851962 1043921 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 07:56:34.855507 1043921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 07:56:34.864631 1043921 kubeadm.go:884] updating cluster {Name:addons-782760 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-782760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 07:56:34.864753 1043921 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 07:56:34.864813 1043921 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 07:56:34.904086 1043921 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 07:56:34.904110 1043921 crio.go:433] Images already preloaded, skipping extraction
	I1123 07:56:34.904168 1043921 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 07:56:34.929073 1043921 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 07:56:34.929108 1043921 cache_images.go:86] Images are preloaded, skipping loading
	I1123 07:56:34.929116 1043921 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1123 07:56:34.929217 1043921 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-782760 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-782760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
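	The unit override rendered above is written a few lines below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 363-byte scp). Once the node is up, one way to confirm systemd picked it up:

	    # Show the effective kubelet unit, including minikube's drop-in
	    systemctl cat kubelet
	    # The drop-in itself, at the path used later in this log
	    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf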
	I1123 07:56:34.929305 1043921 ssh_runner.go:195] Run: crio config
	I1123 07:56:34.980976 1043921 cni.go:84] Creating CNI manager for ""
	I1123 07:56:34.981000 1043921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 07:56:34.981015 1043921 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 07:56:34.981068 1043921 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-782760 NodeName:addons-782760 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 07:56:34.981209 1043921 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-782760"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 07:56:34.981286 1043921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 07:56:34.988859 1043921 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 07:56:34.988936 1043921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 07:56:34.996156 1043921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 07:56:35.012888 1043921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 07:56:35.026835 1043921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
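	The kubeadm config dumped above is what lands in /var/tmp/minikube/kubeadm.yaml.new here (and is copied to kubeadm.yaml before init further down). It can be sanity-checked without applying anything via a dry run; a sketch using the bundled kubeadm binary from this log:

	    # Render what kubeadm would do with this config without changing the node
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run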
	I1123 07:56:35.039308 1043921 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1123 07:56:35.042791 1043921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 07:56:35.051959 1043921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 07:56:35.159384 1043921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 07:56:35.174884 1043921 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760 for IP: 192.168.49.2
	I1123 07:56:35.174906 1043921 certs.go:195] generating shared ca certs ...
	I1123 07:56:35.174923 1043921 certs.go:227] acquiring lock for ca certs: {Name:mk8b2dd1177c57b74f955f055073d275001ee616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:35.175132 1043921 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key
	I1123 07:56:35.444177 1043921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt ...
	I1123 07:56:35.444210 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt: {Name:mk8146c5b7a605f779e320eb84a5cb2ea564082b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:35.444448 1043921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key ...
	I1123 07:56:35.444465 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key: {Name:mk26f3ffa20a6bcc50ae913917776508521cc9b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:35.444589 1043921 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key
	I1123 07:56:35.756453 1043921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt ...
	I1123 07:56:35.756482 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt: {Name:mk1f883bd52c353a0d324bd09106e5a1dc14c56c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:35.756659 1043921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key ...
	I1123 07:56:35.756672 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key: {Name:mke4c88318427a2ef42dd51a08bdffba43aefe94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:35.756753 1043921 certs.go:257] generating profile certs ...
	I1123 07:56:35.756822 1043921 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.key
	I1123 07:56:35.756838 1043921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt with IP's: []
	I1123 07:56:36.045925 1043921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt ...
	I1123 07:56:36.045969 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: {Name:mk7dc761132cd3836da2c08a7038d07c60f4df22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:36.046163 1043921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.key ...
	I1123 07:56:36.046176 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.key: {Name:mka496c926ea8fd6d350fdb7fa6c05066bc5e55d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:36.046262 1043921 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.key.5e94d694
	I1123 07:56:36.046283 1043921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.crt.5e94d694 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1123 07:56:36.205078 1043921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.crt.5e94d694 ...
	I1123 07:56:36.205109 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.crt.5e94d694: {Name:mk8a13a122175f0ddb1281d41cffd2c533aaf4b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:36.205294 1043921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.key.5e94d694 ...
	I1123 07:56:36.205311 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.key.5e94d694: {Name:mk8bba5e6439cf832dddfbbf160c0063b04ad5f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:36.205410 1043921 certs.go:382] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.crt.5e94d694 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.crt
	I1123 07:56:36.205523 1043921 certs.go:386] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.key.5e94d694 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.key
	I1123 07:56:36.205580 1043921 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/proxy-client.key
	I1123 07:56:36.205600 1043921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/proxy-client.crt with IP's: []
	I1123 07:56:36.275741 1043921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/proxy-client.crt ...
	I1123 07:56:36.275770 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/proxy-client.crt: {Name:mka7718c2545da001702f275c6eea0267d39520a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:36.275939 1043921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/proxy-client.key ...
	I1123 07:56:36.275951 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/proxy-client.key: {Name:mkdccc58ebfd80aafabea53e5a76b2198b113569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:36.276137 1043921 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 07:56:36.276179 1043921 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem (1078 bytes)
	I1123 07:56:36.276208 1043921 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem (1123 bytes)
	I1123 07:56:36.276243 1043921 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem (1675 bytes)
	I1123 07:56:36.276779 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 07:56:36.295448 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 07:56:36.313203 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 07:56:36.330320 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 07:56:36.347226 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 07:56:36.364044 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 07:56:36.380872 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 07:56:36.398081 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 07:56:36.415480 1043921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 07:56:36.432775 1043921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 07:56:36.445691 1043921 ssh_runner.go:195] Run: openssl version
	I1123 07:56:36.451981 1043921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 07:56:36.460479 1043921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 07:56:36.464420 1043921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 07:56:36.464540 1043921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 07:56:36.505058 1043921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
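	The b5213941.0 link name is OpenSSL's subject hash of the minikube CA, which is why the two commands above pair -hash with a symlink into /etc/ssl/certs. The same steps, written out generically with the paths from this log:

	    # Compute the subject hash OpenSSL uses when looking up trusted CAs
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    # Link the CA under its hashed name (here $h expands to b5213941)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"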
	I1123 07:56:36.513709 1043921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 07:56:36.517296 1043921 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 07:56:36.517349 1043921 kubeadm.go:401] StartCluster: {Name:addons-782760 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-782760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 07:56:36.517435 1043921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:56:36.517495 1043921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:56:36.543266 1043921 cri.go:89] found id: ""
	I1123 07:56:36.543337 1043921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 07:56:36.550854 1043921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 07:56:36.558218 1043921 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 07:56:36.558325 1043921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 07:56:36.565691 1043921 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 07:56:36.565710 1043921 kubeadm.go:158] found existing configuration files:
	
	I1123 07:56:36.565759 1043921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 07:56:36.573169 1043921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 07:56:36.573262 1043921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 07:56:36.580328 1043921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 07:56:36.588003 1043921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 07:56:36.588173 1043921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 07:56:36.595677 1043921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 07:56:36.602945 1043921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 07:56:36.603058 1043921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 07:56:36.610552 1043921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 07:56:36.617998 1043921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 07:56:36.618122 1043921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 07:56:36.626644 1043921 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 07:56:36.679608 1043921 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 07:56:36.679669 1043921 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 07:56:36.703818 1043921 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 07:56:36.703898 1043921 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 07:56:36.703938 1043921 kubeadm.go:319] OS: Linux
	I1123 07:56:36.703988 1043921 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 07:56:36.704041 1043921 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 07:56:36.704098 1043921 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 07:56:36.704151 1043921 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 07:56:36.704203 1043921 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 07:56:36.704262 1043921 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 07:56:36.704312 1043921 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 07:56:36.704364 1043921 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 07:56:36.704415 1043921 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 07:56:36.770028 1043921 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 07:56:36.770235 1043921 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 07:56:36.770375 1043921 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 07:56:36.777168 1043921 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 07:56:36.783796 1043921 out.go:252]   - Generating certificates and keys ...
	I1123 07:56:36.783902 1043921 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 07:56:36.783974 1043921 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 07:56:38.594555 1043921 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 07:56:39.216478 1043921 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 07:56:39.555558 1043921 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 07:56:39.941644 1043921 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 07:56:40.768461 1043921 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 07:56:40.768639 1043921 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-782760 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 07:56:41.113559 1043921 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 07:56:41.113914 1043921 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-782760 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 07:56:41.500982 1043921 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 07:56:41.616936 1043921 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 07:56:42.411413 1043921 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 07:56:42.411714 1043921 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 07:56:42.558575 1043921 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 07:56:42.996050 1043921 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 07:56:43.282564 1043921 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 07:56:44.031592 1043921 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 07:56:44.883966 1043921 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 07:56:44.884921 1043921 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 07:56:44.888862 1043921 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 07:56:44.892254 1043921 out.go:252]   - Booting up control plane ...
	I1123 07:56:44.892365 1043921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 07:56:44.892443 1043921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 07:56:44.893349 1043921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 07:56:44.912506 1043921 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 07:56:44.912880 1043921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 07:56:44.920000 1043921 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 07:56:44.920316 1043921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 07:56:44.920570 1043921 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 07:56:45.059289 1043921 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 07:56:45.059435 1043921 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 07:56:46.560444 1043921 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501700148s
	I1123 07:56:46.571397 1043921 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 07:56:46.571494 1043921 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1123 07:56:46.571583 1043921 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 07:56:46.571668 1043921 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 07:56:49.755635 1043921 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.183997939s
	I1123 07:56:51.224076 1043921 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.652670518s
	I1123 07:56:52.573664 1043921 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002063363s
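	The four health endpoints polled here are plain HTTP(S) URLs and can be probed by hand from inside the node; a sketch using exactly the addresses reported above (the TLS endpoints use self-signed certs, hence -k):

	    # kubelet healthz (plain HTTP)
	    curl -s  http://127.0.0.1:10248/healthz; echo
	    # kube-controller-manager and kube-scheduler
	    curl -sk https://127.0.0.1:10257/healthz; echo
	    curl -sk https://127.0.0.1:10259/livez; echo
	    # kube-apiserver livez on the advertise address
	    curl -sk https://192.168.49.2:8443/livez; echo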
	I1123 07:56:52.594117 1043921 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 07:56:52.607806 1043921 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 07:56:52.620157 1043921 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 07:56:52.620364 1043921 kubeadm.go:319] [mark-control-plane] Marking the node addons-782760 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 07:56:52.632440 1043921 kubeadm.go:319] [bootstrap-token] Using token: 1t27ze.71y3zo3jsbxnoaq7
	I1123 07:56:52.637420 1043921 out.go:252]   - Configuring RBAC rules ...
	I1123 07:56:52.637550 1043921 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 07:56:52.641780 1043921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 07:56:52.649281 1043921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 07:56:52.652947 1043921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 07:56:52.657121 1043921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 07:56:52.663308 1043921 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 07:56:52.981051 1043921 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 07:56:53.413290 1043921 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 07:56:53.980493 1043921 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 07:56:53.981488 1043921 kubeadm.go:319] 
	I1123 07:56:53.981557 1043921 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 07:56:53.981562 1043921 kubeadm.go:319] 
	I1123 07:56:53.981639 1043921 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 07:56:53.981643 1043921 kubeadm.go:319] 
	I1123 07:56:53.981667 1043921 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 07:56:53.981726 1043921 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 07:56:53.981777 1043921 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 07:56:53.981780 1043921 kubeadm.go:319] 
	I1123 07:56:53.981834 1043921 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 07:56:53.981838 1043921 kubeadm.go:319] 
	I1123 07:56:53.981886 1043921 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 07:56:53.981891 1043921 kubeadm.go:319] 
	I1123 07:56:53.981943 1043921 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 07:56:53.982018 1043921 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 07:56:53.982086 1043921 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 07:56:53.982090 1043921 kubeadm.go:319] 
	I1123 07:56:53.982184 1043921 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 07:56:53.982273 1043921 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 07:56:53.982278 1043921 kubeadm.go:319] 
	I1123 07:56:53.982361 1043921 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1t27ze.71y3zo3jsbxnoaq7 \
	I1123 07:56:53.982464 1043921 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb \
	I1123 07:56:53.982484 1043921 kubeadm.go:319] 	--control-plane 
	I1123 07:56:53.982488 1043921 kubeadm.go:319] 
	I1123 07:56:53.982572 1043921 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 07:56:53.982576 1043921 kubeadm.go:319] 
	I1123 07:56:53.982658 1043921 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1t27ze.71y3zo3jsbxnoaq7 \
	I1123 07:56:53.983019 1043921 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb 
	I1123 07:56:53.986842 1043921 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 07:56:53.987075 1043921 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 07:56:53.987207 1043921 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
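	The --discovery-token-ca-cert-hash printed in the join commands above is, per kubeadm's documentation, the SHA-256 of the cluster CA certificate's public key (SubjectPublicKeyInfo). A minimal Go sketch of recomputing such a hash from a CA file; the path and error handling here are assumptions for illustration, not taken from this run:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Assumed location; kubeadm's cluster CA is normally written here.
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Hash the DER-encoded SubjectPublicKeyInfo of the CA public key.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			log.Fatal(err)
		}
		sum := sha256.Sum256(spki)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}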
	I1123 07:56:53.987220 1043921 cni.go:84] Creating CNI manager for ""
	I1123 07:56:53.987227 1043921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 07:56:53.992309 1043921 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 07:56:53.995328 1043921 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 07:56:53.999064 1043921 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 07:56:53.999082 1043921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 07:56:54.015309 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
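	The "scp memory --> ..." lines show manifests that exist only in memory being copied to a path inside the node over SSH before kubectl applies them. A rough sketch of achieving the same effect with golang.org/x/crypto/ssh; this pipes the bytes into "sudo tee" for simplicity and is not minikube's actual ssh_runner implementation (which performs an scp-style copy, as the log line says):

	package example

	import (
		"bytes"

		"golang.org/x/crypto/ssh"
	)

	// pushBytes writes in-memory data to remotePath on the node by piping it
	// into "sudo tee" over an SSH session.
	func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()
		session.Stdin = bytes.NewReader(data)
		return session.Run("sudo tee " + remotePath + " > /dev/null")
	}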
	I1123 07:56:54.284246 1043921 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 07:56:54.284378 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:54.284472 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-782760 minikube.k8s.io/updated_at=2025_11_23T07_56_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=addons-782760 minikube.k8s.io/primary=true
	I1123 07:56:54.300757 1043921 ops.go:34] apiserver oom_adj: -16
	I1123 07:56:54.440806 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:54.940988 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:55.440989 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:55.940943 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:56.441143 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:56.940876 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:57.441705 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:57.941031 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:58.441013 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:58.941144 1043921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:59.042917 1043921 kubeadm.go:1114] duration metric: took 4.758581917s to wait for elevateKubeSystemPrivileges
	I1123 07:56:59.042947 1043921 kubeadm.go:403] duration metric: took 22.525601601s to StartCluster
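	The repeated "kubectl get sa default" runs above, roughly every 500ms, are the wait that the duration metric attributes to elevateKubeSystemPrivileges: addon deployment proceeds only once the default ServiceAccount exists. A minimal client-go sketch of an equivalent wait; the interval and timeout are assumptions chosen to match the cadence in the log:

	package example

	import (
		"context"
		"time"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForDefaultServiceAccount polls until the "default" ServiceAccount in
	// the "default" namespace exists, i.e. the controller-manager has finished
	// bootstrapping service accounts.
	func waitForDefaultServiceAccount(ctx context.Context, cs kubernetes.Interface) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					return false, nil // not created yet; keep polling
				}
				return err == nil, err
			})
	}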
	I1123 07:56:59.042964 1043921 settings.go:142] acquiring lock: {Name:mk23f3092f33e47ced9558cb4bac2b30c55547fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:59.043085 1043921 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 07:56:59.043492 1043921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:59.043686 1043921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 07:56:59.043706 1043921 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 07:56:59.043988 1043921 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:56:59.044039 1043921 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1123 07:56:59.044138 1043921 addons.go:70] Setting yakd=true in profile "addons-782760"
	I1123 07:56:59.044152 1043921 addons.go:239] Setting addon yakd=true in "addons-782760"
	I1123 07:56:59.044173 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.044711 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.045092 1043921 addons.go:70] Setting metrics-server=true in profile "addons-782760"
	I1123 07:56:59.045114 1043921 addons.go:239] Setting addon metrics-server=true in "addons-782760"
	I1123 07:56:59.045137 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.045557 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.045685 1043921 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-782760"
	I1123 07:56:59.045697 1043921 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-782760"
	I1123 07:56:59.045715 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.046117 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.048274 1043921 addons.go:70] Setting registry=true in profile "addons-782760"
	I1123 07:56:59.048306 1043921 addons.go:239] Setting addon registry=true in "addons-782760"
	I1123 07:56:59.048457 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.048468 1043921 addons.go:70] Setting cloud-spanner=true in profile "addons-782760"
	I1123 07:56:59.048492 1043921 addons.go:239] Setting addon cloud-spanner=true in "addons-782760"
	I1123 07:56:59.048525 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.049025 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.048453 1043921 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-782760"
	I1123 07:56:59.049232 1043921 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-782760"
	I1123 07:56:59.049257 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.049726 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.052089 1043921 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-782760"
	I1123 07:56:59.052158 1043921 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-782760"
	I1123 07:56:59.052192 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.052693 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.053653 1043921 addons.go:70] Setting registry-creds=true in profile "addons-782760"
	I1123 07:56:59.053675 1043921 addons.go:239] Setting addon registry-creds=true in "addons-782760"
	I1123 07:56:59.053706 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.054279 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.056588 1043921 addons.go:70] Setting default-storageclass=true in profile "addons-782760"
	I1123 07:56:59.056627 1043921 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-782760"
	I1123 07:56:59.056969 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.063908 1043921 addons.go:70] Setting storage-provisioner=true in profile "addons-782760"
	I1123 07:56:59.063948 1043921 addons.go:239] Setting addon storage-provisioner=true in "addons-782760"
	I1123 07:56:59.063982 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.064458 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.066722 1043921 addons.go:70] Setting gcp-auth=true in profile "addons-782760"
	I1123 07:56:59.066759 1043921 mustload.go:66] Loading cluster: addons-782760
	I1123 07:56:59.066992 1043921 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:56:59.067356 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.079370 1043921 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-782760"
	I1123 07:56:59.079418 1043921 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-782760"
	I1123 07:56:59.079901 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.103581 1043921 addons.go:70] Setting ingress=true in profile "addons-782760"
	I1123 07:56:59.103626 1043921 addons.go:239] Setting addon ingress=true in "addons-782760"
	I1123 07:56:59.103675 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.104254 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.112593 1043921 addons.go:70] Setting volcano=true in profile "addons-782760"
	I1123 07:56:59.112638 1043921 addons.go:239] Setting addon volcano=true in "addons-782760"
	I1123 07:56:59.112675 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.114304 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.128263 1043921 addons.go:70] Setting ingress-dns=true in profile "addons-782760"
	I1123 07:56:59.128307 1043921 addons.go:239] Setting addon ingress-dns=true in "addons-782760"
	I1123 07:56:59.128381 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.128940 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.145400 1043921 addons.go:70] Setting volumesnapshots=true in profile "addons-782760"
	I1123 07:56:59.145437 1043921 addons.go:239] Setting addon volumesnapshots=true in "addons-782760"
	I1123 07:56:59.145475 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.145937 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.155564 1043921 addons.go:70] Setting inspektor-gadget=true in profile "addons-782760"
	I1123 07:56:59.155672 1043921 addons.go:239] Setting addon inspektor-gadget=true in "addons-782760"
	I1123 07:56:59.155749 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.156498 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.179228 1043921 out.go:179] * Verifying Kubernetes components...
	I1123 07:56:59.182815 1043921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 07:56:59.184035 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.282310 1043921 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1123 07:56:59.317509 1043921 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1123 07:56:59.326693 1043921 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1123 07:56:59.326717 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1123 07:56:59.326795 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
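	The recurring "docker container inspect -f ..." lines use a Go template to resolve which host port Docker mapped to the node container's SSH port 22; that port is what the later ssh clients (Port:34227) connect to. A small sketch of the same lookup via os/exec; the helper name is hypothetical, and minikube additionally wraps the template output in quotes:

	package example

	import (
		"os/exec"
		"strings"
	)

	// hostSSHPort returns the host port Docker mapped to the container's 22/tcp,
	// using the same inspect template that appears in the log above.
	func hostSSHPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}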
	I1123 07:56:59.369193 1043921 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1123 07:56:59.369368 1043921 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1123 07:56:59.369516 1043921 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1123 07:56:59.369703 1043921 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1123 07:56:59.381259 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.382782 1043921 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 07:56:59.382845 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1123 07:56:59.382923 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.386213 1043921 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1123 07:56:59.386232 1043921 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1123 07:56:59.386300 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.371527 1043921 addons.go:239] Setting addon default-storageclass=true in "addons-782760"
	I1123 07:56:59.394417 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.394974 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.403672 1043921 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-782760"
	I1123 07:56:59.403767 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:56:59.404288 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:56:59.416140 1043921 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1123 07:56:59.416307 1043921 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1123 07:56:59.416472 1043921 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	W1123 07:56:59.392491 1043921 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1123 07:56:59.392074 1043921 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 07:56:59.428029 1043921 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 07:56:59.428102 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.429590 1043921 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1123 07:56:59.429598 1043921 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 07:56:59.429749 1043921 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 07:56:59.444513 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1123 07:56:59.444586 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.454966 1043921 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1123 07:56:59.456941 1043921 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1123 07:56:59.457227 1043921 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 07:56:59.457272 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 07:56:59.457358 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.441545 1043921 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 07:56:59.470004 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1123 07:56:59.470079 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.482696 1043921 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1123 07:56:59.483631 1043921 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1123 07:56:59.483649 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1123 07:56:59.483749 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.490351 1043921 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1123 07:56:59.494021 1043921 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1123 07:56:59.497188 1043921 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1123 07:56:59.497336 1043921 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 07:56:59.498545 1043921 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 07:56:59.498563 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1123 07:56:59.498627 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.520724 1043921 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1123 07:56:59.522793 1043921 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1123 07:56:59.529560 1043921 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1123 07:56:59.531381 1043921 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1123 07:56:59.531417 1043921 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1123 07:56:59.531490 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.547350 1043921 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 07:56:59.557306 1043921 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1123 07:56:59.558454 1043921 out.go:179]   - Using image docker.io/registry:3.0.0
	I1123 07:56:59.559341 1043921 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 07:56:59.559359 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1123 07:56:59.559510 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.564794 1043921 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1123 07:56:59.564813 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1123 07:56:59.564949 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.548262 1043921 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1123 07:56:59.584788 1043921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1123 07:56:59.587274 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.602981 1043921 out.go:179]   - Using image docker.io/busybox:stable
	I1123 07:56:59.609456 1043921 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 07:56:59.609486 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1123 07:56:59.609546 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.631057 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.634106 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.634805 1043921 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 07:56:59.634818 1043921 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 07:56:59.634948 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:56:59.704609 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.721324 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.742816 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.767819 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.770671 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.794699 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.797100 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.804811 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.822613 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	W1123 07:56:59.827754 1043921 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 07:56:59.827871 1043921 retry.go:31] will retry after 218.284426ms: ssh: handshake failed: EOF
	I1123 07:56:59.837970 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.847888 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	W1123 07:56:59.855465 1043921 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 07:56:59.855564 1043921 retry.go:31] will retry after 171.88141ms: ssh: handshake failed: EOF
	I1123 07:56:59.857529 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:56:59.859058 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
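	The "dial failure (will retry)" / "will retry after ...ms" pairs above show transient SSH handshake failures being absorbed by a short, jittered retry before the addon manifests are pushed. A compact sketch of that pattern; the real retry.go logic differs in detail and these parameters are assumptions:

	package example

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn up to attempts times, sleeping a randomized,
	// roughly doubling delay between tries, and returns the last error if all fail.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}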
	I1123 07:56:59.862179 1043921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 07:56:59.862427 1043921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 07:57:00.450521 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1123 07:57:00.547480 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 07:57:00.588696 1043921 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1123 07:57:00.588767 1043921 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1123 07:57:00.620644 1043921 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 07:57:00.620722 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1123 07:57:00.640541 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 07:57:00.668490 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 07:57:00.675233 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 07:57:00.675604 1043921 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1123 07:57:00.675653 1043921 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1123 07:57:00.733635 1043921 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1123 07:57:00.733714 1043921 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1123 07:57:00.738056 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 07:57:00.756322 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1123 07:57:00.765253 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 07:57:00.767441 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 07:57:00.768541 1043921 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1123 07:57:00.768595 1043921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1123 07:57:00.776349 1043921 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1123 07:57:00.776420 1043921 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1123 07:57:00.843963 1043921 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 07:57:00.844039 1043921 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 07:57:00.847004 1043921 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1123 07:57:00.847071 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1123 07:57:00.912099 1043921 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1123 07:57:00.912181 1043921 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1123 07:57:00.928887 1043921 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1123 07:57:00.928966 1043921 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1123 07:57:00.940786 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 07:57:00.978722 1043921 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1123 07:57:00.978811 1043921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1123 07:57:01.048550 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1123 07:57:01.052140 1043921 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 07:57:01.052213 1043921 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 07:57:01.067780 1043921 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1123 07:57:01.067849 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1123 07:57:01.097329 1043921 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1123 07:57:01.097362 1043921 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1123 07:57:01.121860 1043921 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1123 07:57:01.121886 1043921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1123 07:57:01.178716 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 07:57:01.215248 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1123 07:57:01.248929 1043921 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1123 07:57:01.249009 1043921 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1123 07:57:01.289283 1043921 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1123 07:57:01.289365 1043921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1123 07:57:01.498155 1043921 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 07:57:01.498228 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1123 07:57:01.537740 1043921 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1123 07:57:01.537813 1043921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1123 07:57:01.642949 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 07:57:01.786201 1043921 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.923993159s)
	I1123 07:57:01.786160 1043921 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.923682768s)
	I1123 07:57:01.786380 1043921 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
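	The long sed pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway and query logging is enabled. Reconstructed from the sed expression above, the affected part of the Corefile ends up looking like this (other plugins elided and unchanged):

	        log
	        errors
	        ...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf ...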
	I1123 07:57:01.787694 1043921 node_ready.go:35] waiting up to 6m0s for node "addons-782760" to be "Ready" ...
	I1123 07:57:01.801880 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.351251344s)
	I1123 07:57:01.817884 1043921 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1123 07:57:01.817908 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1123 07:57:02.046481 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.498891038s)
	I1123 07:57:02.225103 1043921 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1123 07:57:02.225125 1043921 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1123 07:57:02.294076 1043921 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-782760" context rescaled to 1 replicas
	I1123 07:57:02.436624 1043921 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1123 07:57:02.436648 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1123 07:57:02.633303 1043921 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1123 07:57:02.633325 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1123 07:57:02.742067 1043921 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 07:57:02.742090 1043921 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1123 07:57:03.037795 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1123 07:57:03.800096 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:04.991087 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.350457894s)
	I1123 07:57:04.991118 1043921 addons.go:495] Verifying addon ingress=true in "addons-782760"
	I1123 07:57:04.991325 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.32274187s)
	I1123 07:57:04.991405 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.316099976s)
	I1123 07:57:04.991432 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.253311428s)
	I1123 07:57:04.991509 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.235119054s)
	I1123 07:57:04.991617 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.224110819s)
	I1123 07:57:04.991645 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.05079972s)
	I1123 07:57:04.991843 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.943229699s)
	I1123 07:57:04.991862 1043921 addons.go:495] Verifying addon registry=true in "addons-782760"
	I1123 07:57:04.991954 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.22623811s)
	I1123 07:57:04.992114 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.813376057s)
	I1123 07:57:04.992128 1043921 addons.go:495] Verifying addon metrics-server=true in "addons-782760"
	I1123 07:57:04.992265 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.776941824s)
	I1123 07:57:04.995289 1043921 out.go:179] * Verifying ingress addon...
	I1123 07:57:04.997307 1043921 out.go:179] * Verifying registry addon...
	I1123 07:57:04.997397 1043921 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-782760 service yakd-dashboard -n yakd-dashboard
	
	I1123 07:57:04.999970 1043921 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1123 07:57:05.001838 1043921 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1123 07:57:05.053636 1043921 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 07:57:05.053701 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:05.054005 1043921 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1123 07:57:05.054045 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
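	The kapi.go lines above poll pods selected by label (for example app.kubernetes.io/name=ingress-nginx in ingress-nginx, or kubernetes.io/minikube-addons=registry in kube-system) until they leave Pending and become Ready. A minimal client-go sketch of that kind of wait; the selector and namespace come from the log, the helper name, interval, and timeout are assumptions:

	package example

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsBySelector waits until every pod matching selector in namespace
	// is Running and reports the Ready condition as True.
	func waitForPodsBySelector(ctx context.Context, cs kubernetes.Interface, namespace, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 3*time.Second, 10*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // no pods yet, or a transient API error
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil
					}
					ready := false
					for _, c := range p.Status.Conditions {
						if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
							ready = true
						}
					}
					if !ready {
						return false, nil
					}
				}
				return true, nil
			})
	}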
	W1123 07:57:05.069493 1043921 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1123 07:57:05.171825 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.528793554s)
	W1123 07:57:05.171859 1043921 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1123 07:57:05.171879 1043921 retry.go:31] will retry after 127.667831ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1123 07:57:05.299694 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 07:57:05.510580 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:05.511011 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:05.755978 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.718136521s)
	I1123 07:57:05.756051 1043921 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-782760"
	I1123 07:57:05.759277 1043921 out.go:179] * Verifying csi-hostpath-driver addon...
	I1123 07:57:05.762874 1043921 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1123 07:57:05.768093 1043921 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 07:57:05.768153 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:06.008555 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:06.011387 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:06.266012 1043921 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 07:57:06.266036 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:06.291136 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:06.504254 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:06.505412 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:06.766878 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:06.992164 1043921 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1123 07:57:06.992267 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:57:07.008870 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:07.008934 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:07.013494 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:57:07.132846 1043921 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1123 07:57:07.145411 1043921 addons.go:239] Setting addon gcp-auth=true in "addons-782760"
	I1123 07:57:07.145459 1043921 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:57:07.145933 1043921 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:57:07.162851 1043921 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1123 07:57:07.162903 1043921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:57:07.180029 1043921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:57:07.266393 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:07.503360 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:07.505269 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:07.766092 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:08.005617 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:08.008574 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:08.144061 1043921 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 07:57:08.144175 1043921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.844392662s)
	I1123 07:57:08.149881 1043921 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1123 07:57:08.152773 1043921 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1123 07:57:08.152799 1043921 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1123 07:57:08.166304 1043921 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1123 07:57:08.166330 1043921 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1123 07:57:08.182113 1043921 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 07:57:08.182136 1043921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1123 07:57:08.195002 1043921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 07:57:08.266904 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:08.292081 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:08.505583 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:08.506738 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:08.684026 1043921 addons.go:495] Verifying addon gcp-auth=true in "addons-782760"
	I1123 07:57:08.686688 1043921 out.go:179] * Verifying gcp-auth addon...
	I1123 07:57:08.689327 1043921 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1123 07:57:08.698242 1043921 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1123 07:57:08.698268 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:08.796379 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:09.004443 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:09.006276 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:09.192287 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:09.266021 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:09.503528 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:09.505578 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:09.692083 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:09.765781 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:10.007258 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:10.011109 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:10.192695 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:10.266730 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:10.292645 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:10.504082 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:10.504817 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:10.692919 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:10.766708 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:11.005085 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:11.007933 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:11.192961 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:11.265809 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:11.504434 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:11.504633 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:11.693091 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:11.765849 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:12.005227 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:12.008285 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:12.193121 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:12.266173 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:12.503870 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:12.505338 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:12.692096 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:12.765928 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:12.790521 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:13.003599 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:13.006200 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:13.192999 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:13.266532 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:13.503834 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:13.504883 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:13.693057 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:13.765956 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:14.004916 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:14.006871 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:14.192612 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:14.266453 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:14.503956 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:14.504148 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:14.692901 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:14.765837 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:14.790688 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:15.006954 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:15.008291 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:15.192123 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:15.266056 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:15.503204 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:15.505560 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:15.692418 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:15.766065 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:16.008026 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:16.008507 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:16.192216 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:16.265904 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:16.504497 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:16.505209 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:16.692978 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:16.767120 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:16.791046 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:17.003103 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:17.006047 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:17.192955 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:17.265851 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:17.504108 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:17.504331 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:17.692431 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:17.769818 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:18.010590 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:18.011495 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:18.192466 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:18.266402 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:18.505266 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:18.505713 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:18.692488 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:18.766349 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:18.791095 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:19.003291 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:19.005879 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:19.192703 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:19.266545 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:19.504066 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:19.505238 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:19.693227 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:19.765893 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:20.011222 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:20.012050 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:20.193295 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:20.266162 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:20.503951 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:20.505298 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:20.692129 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:20.765779 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:21.005619 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:21.007142 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:21.192140 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:21.265675 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:21.291129 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:21.504762 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:21.506053 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:21.693208 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:21.765998 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:22.006192 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:22.007159 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:22.192898 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:22.265726 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:22.504245 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:22.504987 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:22.692841 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:22.765991 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:23.003364 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:23.006229 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:23.192239 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:23.267088 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:23.291670 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:23.504401 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:23.504601 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:23.692649 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:23.766311 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:24.003080 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:24.007694 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:24.192485 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:24.266218 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:24.503943 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:24.505780 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:24.700212 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:24.765775 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:25.013224 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:25.015280 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:25.193060 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:25.265692 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:25.503901 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:25.504695 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:25.692605 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:25.766200 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:25.791415 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:26.004878 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:26.008420 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:26.192044 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:26.265950 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:26.505520 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:26.505839 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:26.692981 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:26.766981 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:27.005427 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:27.007530 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:27.192724 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:27.266688 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:27.502970 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:27.504768 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:27.692525 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:27.770940 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:28.004937 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:28.007006 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:28.192470 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:28.267912 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:28.290755 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:28.504767 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:28.505196 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:28.693032 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:28.766128 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:29.004052 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:29.006348 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:29.192689 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:29.266881 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:29.503997 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:29.504592 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:29.692519 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:29.766479 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:30.003251 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:30.010371 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:30.193080 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:30.265954 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:30.291420 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:30.503551 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:30.504448 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:30.692499 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:30.765993 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:31.005780 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:31.007975 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:31.193145 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:31.265624 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:31.503760 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:31.504230 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:31.691902 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:31.766572 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:32.003553 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:32.009303 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:32.192108 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:32.266836 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:32.503352 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:32.505077 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:32.693126 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:32.765830 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:32.790474 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:33.005642 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:33.005806 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:33.193081 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:33.266298 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:33.505639 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:33.506155 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:33.693110 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:33.765644 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:34.008194 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:34.008309 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:34.192796 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:34.266439 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:34.504170 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:34.504616 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:34.693368 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:34.766034 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:34.790847 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:35.002897 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:35.005434 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:35.192349 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:35.265957 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:35.504190 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:35.505087 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:35.692856 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:35.766894 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:36.007801 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:36.013446 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:36.192168 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:36.265863 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:36.503405 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:36.504591 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:36.692449 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:36.765954 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:37.006607 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:37.008111 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:37.192255 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:37.266199 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 07:57:37.291971 1043921 node_ready.go:57] node "addons-782760" has "Ready":"False" status (will retry)
	I1123 07:57:37.502958 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:37.505208 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:37.693152 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:37.765687 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:38.010386 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:38.012567 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:38.192834 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:38.267005 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:38.503225 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:38.505085 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:38.697455 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:38.766165 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:39.003042 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:39.006289 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:39.192428 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:39.266282 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:39.503288 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:39.505046 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:39.692797 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:39.794000 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:39.817785 1043921 node_ready.go:49] node "addons-782760" is "Ready"
	I1123 07:57:39.817864 1043921 node_ready.go:38] duration metric: took 38.030132168s for node "addons-782760" to be "Ready" ...
	I1123 07:57:39.817891 1043921 api_server.go:52] waiting for apiserver process to appear ...
	I1123 07:57:39.817977 1043921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 07:57:39.846505 1043921 api_server.go:72] duration metric: took 40.802768186s to wait for apiserver process to appear ...
	I1123 07:57:39.846530 1043921 api_server.go:88] waiting for apiserver healthz status ...
	I1123 07:57:39.846548 1043921 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 07:57:39.862981 1043921 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1123 07:57:39.877943 1043921 api_server.go:141] control plane version: v1.34.1
	I1123 07:57:39.877971 1043921 api_server.go:131] duration metric: took 31.435147ms to wait for apiserver health ...
	I1123 07:57:39.877980 1043921 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 07:57:39.890453 1043921 system_pods.go:59] 19 kube-system pods found
	I1123 07:57:39.890487 1043921 system_pods.go:61] "coredns-66bc5c9577-d9vmc" [554db792-666e-408c-8ae1-52bf3fe32b9a] Pending
	I1123 07:57:39.890493 1043921 system_pods.go:61] "csi-hostpath-attacher-0" [f979198a-fe36-4dc2-8a71-4c41af723eae] Pending
	I1123 07:57:39.890497 1043921 system_pods.go:61] "csi-hostpath-resizer-0" [95093cda-ec14-4f1e-ba9b-d696e2511286] Pending
	I1123 07:57:39.890501 1043921 system_pods.go:61] "csi-hostpathplugin-8j7r2" [2de4707d-c64f-4ebf-9dd2-69abf0bd6418] Pending
	I1123 07:57:39.890505 1043921 system_pods.go:61] "etcd-addons-782760" [dd4b8ebd-25d5-4754-a325-714c8496c618] Running
	I1123 07:57:39.890508 1043921 system_pods.go:61] "kindnet-qrqlv" [754150a4-5e3c-477e-96ac-67e2e8438826] Running
	I1123 07:57:39.890512 1043921 system_pods.go:61] "kube-apiserver-addons-782760" [826caeeb-44b7-449f-a5c2-4a32568deb97] Running
	I1123 07:57:39.890515 1043921 system_pods.go:61] "kube-controller-manager-addons-782760" [6e1ae611-5937-435a-aefa-2f94b36d08e0] Running
	I1123 07:57:39.890519 1043921 system_pods.go:61] "kube-ingress-dns-minikube" [9e0f12dd-7a60-47c9-89d9-feade94785dd] Pending
	I1123 07:57:39.890523 1043921 system_pods.go:61] "kube-proxy-jv2pd" [6c3bfa28-8f74-4b7d-9c44-ecdf225e77dd] Running
	I1123 07:57:39.890526 1043921 system_pods.go:61] "kube-scheduler-addons-782760" [b0d963fa-dc46-4b9c-880e-8d94d6872c1f] Running
	I1123 07:57:39.890531 1043921 system_pods.go:61] "metrics-server-85b7d694d7-l4cfr" [784e1f40-e163-423b-b2c4-7f3e9306070b] Pending
	I1123 07:57:39.890539 1043921 system_pods.go:61] "nvidia-device-plugin-daemonset-stqrq" [68a915e8-7aa3-479a-a75c-9cb582f7b791] Pending
	I1123 07:57:39.890548 1043921 system_pods.go:61] "registry-6b586f9694-rblw8" [a69c6c76-cea7-4b78-b388-24fa7110f257] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:57:39.890558 1043921 system_pods.go:61] "registry-creds-764b6fb674-5m8ft" [6908fc1b-d56b-4159-bae1-3a2c7f324b9e] Pending
	I1123 07:57:39.890564 1043921 system_pods.go:61] "registry-proxy-crmkh" [db5947b1-31f5-4ab2-93fe-b0cb4359b4eb] Pending
	I1123 07:57:39.890568 1043921 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4rwkm" [76531992-e9a2-42a3-8325-63265f73ce98] Pending
	I1123 07:57:39.890571 1043921 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wqcnm" [7516bc6b-a724-4ecf-96e4-82ed81ef59f8] Pending
	I1123 07:57:39.890575 1043921 system_pods.go:61] "storage-provisioner" [15857a38-d245-473f-83fd-6096457f6f64] Pending
	I1123 07:57:39.890586 1043921 system_pods.go:74] duration metric: took 12.600255ms to wait for pod list to return data ...
	I1123 07:57:39.890593 1043921 default_sa.go:34] waiting for default service account to be created ...
	I1123 07:57:39.904168 1043921 default_sa.go:45] found service account: "default"
	I1123 07:57:39.904195 1043921 default_sa.go:55] duration metric: took 13.596412ms for default service account to be created ...
	I1123 07:57:39.904205 1043921 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 07:57:39.911954 1043921 system_pods.go:86] 19 kube-system pods found
	I1123 07:57:39.911989 1043921 system_pods.go:89] "coredns-66bc5c9577-d9vmc" [554db792-666e-408c-8ae1-52bf3fe32b9a] Pending
	I1123 07:57:39.911996 1043921 system_pods.go:89] "csi-hostpath-attacher-0" [f979198a-fe36-4dc2-8a71-4c41af723eae] Pending
	I1123 07:57:39.912000 1043921 system_pods.go:89] "csi-hostpath-resizer-0" [95093cda-ec14-4f1e-ba9b-d696e2511286] Pending
	I1123 07:57:39.912003 1043921 system_pods.go:89] "csi-hostpathplugin-8j7r2" [2de4707d-c64f-4ebf-9dd2-69abf0bd6418] Pending
	I1123 07:57:39.912007 1043921 system_pods.go:89] "etcd-addons-782760" [dd4b8ebd-25d5-4754-a325-714c8496c618] Running
	I1123 07:57:39.912012 1043921 system_pods.go:89] "kindnet-qrqlv" [754150a4-5e3c-477e-96ac-67e2e8438826] Running
	I1123 07:57:39.912017 1043921 system_pods.go:89] "kube-apiserver-addons-782760" [826caeeb-44b7-449f-a5c2-4a32568deb97] Running
	I1123 07:57:39.912021 1043921 system_pods.go:89] "kube-controller-manager-addons-782760" [6e1ae611-5937-435a-aefa-2f94b36d08e0] Running
	I1123 07:57:39.912025 1043921 system_pods.go:89] "kube-ingress-dns-minikube" [9e0f12dd-7a60-47c9-89d9-feade94785dd] Pending
	I1123 07:57:39.912029 1043921 system_pods.go:89] "kube-proxy-jv2pd" [6c3bfa28-8f74-4b7d-9c44-ecdf225e77dd] Running
	I1123 07:57:39.912033 1043921 system_pods.go:89] "kube-scheduler-addons-782760" [b0d963fa-dc46-4b9c-880e-8d94d6872c1f] Running
	I1123 07:57:39.912041 1043921 system_pods.go:89] "metrics-server-85b7d694d7-l4cfr" [784e1f40-e163-423b-b2c4-7f3e9306070b] Pending
	I1123 07:57:39.912046 1043921 system_pods.go:89] "nvidia-device-plugin-daemonset-stqrq" [68a915e8-7aa3-479a-a75c-9cb582f7b791] Pending
	I1123 07:57:39.912055 1043921 system_pods.go:89] "registry-6b586f9694-rblw8" [a69c6c76-cea7-4b78-b388-24fa7110f257] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:57:39.912059 1043921 system_pods.go:89] "registry-creds-764b6fb674-5m8ft" [6908fc1b-d56b-4159-bae1-3a2c7f324b9e] Pending
	I1123 07:57:39.912072 1043921 system_pods.go:89] "registry-proxy-crmkh" [db5947b1-31f5-4ab2-93fe-b0cb4359b4eb] Pending
	I1123 07:57:39.912077 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4rwkm" [76531992-e9a2-42a3-8325-63265f73ce98] Pending
	I1123 07:57:39.912081 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wqcnm" [7516bc6b-a724-4ecf-96e4-82ed81ef59f8] Pending
	I1123 07:57:39.912091 1043921 system_pods.go:89] "storage-provisioner" [15857a38-d245-473f-83fd-6096457f6f64] Pending
	I1123 07:57:39.912106 1043921 retry.go:31] will retry after 274.40814ms: missing components: kube-dns
	I1123 07:57:40.013809 1043921 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 07:57:40.013904 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:40.014948 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:40.235451 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:40.241095 1043921 system_pods.go:86] 19 kube-system pods found
	I1123 07:57:40.241182 1043921 system_pods.go:89] "coredns-66bc5c9577-d9vmc" [554db792-666e-408c-8ae1-52bf3fe32b9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 07:57:40.241204 1043921 system_pods.go:89] "csi-hostpath-attacher-0" [f979198a-fe36-4dc2-8a71-4c41af723eae] Pending
	I1123 07:57:40.241225 1043921 system_pods.go:89] "csi-hostpath-resizer-0" [95093cda-ec14-4f1e-ba9b-d696e2511286] Pending
	I1123 07:57:40.241257 1043921 system_pods.go:89] "csi-hostpathplugin-8j7r2" [2de4707d-c64f-4ebf-9dd2-69abf0bd6418] Pending
	I1123 07:57:40.241279 1043921 system_pods.go:89] "etcd-addons-782760" [dd4b8ebd-25d5-4754-a325-714c8496c618] Running
	I1123 07:57:40.241299 1043921 system_pods.go:89] "kindnet-qrqlv" [754150a4-5e3c-477e-96ac-67e2e8438826] Running
	I1123 07:57:40.241318 1043921 system_pods.go:89] "kube-apiserver-addons-782760" [826caeeb-44b7-449f-a5c2-4a32568deb97] Running
	I1123 07:57:40.241355 1043921 system_pods.go:89] "kube-controller-manager-addons-782760" [6e1ae611-5937-435a-aefa-2f94b36d08e0] Running
	I1123 07:57:40.241374 1043921 system_pods.go:89] "kube-ingress-dns-minikube" [9e0f12dd-7a60-47c9-89d9-feade94785dd] Pending
	I1123 07:57:40.241392 1043921 system_pods.go:89] "kube-proxy-jv2pd" [6c3bfa28-8f74-4b7d-9c44-ecdf225e77dd] Running
	I1123 07:57:40.241426 1043921 system_pods.go:89] "kube-scheduler-addons-782760" [b0d963fa-dc46-4b9c-880e-8d94d6872c1f] Running
	I1123 07:57:40.241452 1043921 system_pods.go:89] "metrics-server-85b7d694d7-l4cfr" [784e1f40-e163-423b-b2c4-7f3e9306070b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 07:57:40.241472 1043921 system_pods.go:89] "nvidia-device-plugin-daemonset-stqrq" [68a915e8-7aa3-479a-a75c-9cb582f7b791] Pending
	I1123 07:57:40.241507 1043921 system_pods.go:89] "registry-6b586f9694-rblw8" [a69c6c76-cea7-4b78-b388-24fa7110f257] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:57:40.241530 1043921 system_pods.go:89] "registry-creds-764b6fb674-5m8ft" [6908fc1b-d56b-4159-bae1-3a2c7f324b9e] Pending
	I1123 07:57:40.241551 1043921 system_pods.go:89] "registry-proxy-crmkh" [db5947b1-31f5-4ab2-93fe-b0cb4359b4eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 07:57:40.241584 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4rwkm" [76531992-e9a2-42a3-8325-63265f73ce98] Pending
	I1123 07:57:40.241611 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wqcnm" [7516bc6b-a724-4ecf-96e4-82ed81ef59f8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:57:40.241631 1043921 system_pods.go:89] "storage-provisioner" [15857a38-d245-473f-83fd-6096457f6f64] Pending
	I1123 07:57:40.241678 1043921 retry.go:31] will retry after 358.244102ms: missing components: kube-dns
	I1123 07:57:40.274512 1043921 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 07:57:40.274583 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:40.508900 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:40.509005 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:40.611043 1043921 system_pods.go:86] 19 kube-system pods found
	I1123 07:57:40.611128 1043921 system_pods.go:89] "coredns-66bc5c9577-d9vmc" [554db792-666e-408c-8ae1-52bf3fe32b9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 07:57:40.611153 1043921 system_pods.go:89] "csi-hostpath-attacher-0" [f979198a-fe36-4dc2-8a71-4c41af723eae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 07:57:40.611203 1043921 system_pods.go:89] "csi-hostpath-resizer-0" [95093cda-ec14-4f1e-ba9b-d696e2511286] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 07:57:40.611231 1043921 system_pods.go:89] "csi-hostpathplugin-8j7r2" [2de4707d-c64f-4ebf-9dd2-69abf0bd6418] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 07:57:40.611251 1043921 system_pods.go:89] "etcd-addons-782760" [dd4b8ebd-25d5-4754-a325-714c8496c618] Running
	I1123 07:57:40.611285 1043921 system_pods.go:89] "kindnet-qrqlv" [754150a4-5e3c-477e-96ac-67e2e8438826] Running
	I1123 07:57:40.611305 1043921 system_pods.go:89] "kube-apiserver-addons-782760" [826caeeb-44b7-449f-a5c2-4a32568deb97] Running
	I1123 07:57:40.611323 1043921 system_pods.go:89] "kube-controller-manager-addons-782760" [6e1ae611-5937-435a-aefa-2f94b36d08e0] Running
	I1123 07:57:40.611346 1043921 system_pods.go:89] "kube-ingress-dns-minikube" [9e0f12dd-7a60-47c9-89d9-feade94785dd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 07:57:40.611380 1043921 system_pods.go:89] "kube-proxy-jv2pd" [6c3bfa28-8f74-4b7d-9c44-ecdf225e77dd] Running
	I1123 07:57:40.611398 1043921 system_pods.go:89] "kube-scheduler-addons-782760" [b0d963fa-dc46-4b9c-880e-8d94d6872c1f] Running
	I1123 07:57:40.611419 1043921 system_pods.go:89] "metrics-server-85b7d694d7-l4cfr" [784e1f40-e163-423b-b2c4-7f3e9306070b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 07:57:40.611454 1043921 system_pods.go:89] "nvidia-device-plugin-daemonset-stqrq" [68a915e8-7aa3-479a-a75c-9cb582f7b791] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 07:57:40.611480 1043921 system_pods.go:89] "registry-6b586f9694-rblw8" [a69c6c76-cea7-4b78-b388-24fa7110f257] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:57:40.611502 1043921 system_pods.go:89] "registry-creds-764b6fb674-5m8ft" [6908fc1b-d56b-4159-bae1-3a2c7f324b9e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 07:57:40.611538 1043921 system_pods.go:89] "registry-proxy-crmkh" [db5947b1-31f5-4ab2-93fe-b0cb4359b4eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 07:57:40.611561 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4rwkm" [76531992-e9a2-42a3-8325-63265f73ce98] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:57:40.611585 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wqcnm" [7516bc6b-a724-4ecf-96e4-82ed81ef59f8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:57:40.611620 1043921 system_pods.go:89] "storage-provisioner" [15857a38-d245-473f-83fd-6096457f6f64] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 07:57:40.611741 1043921 retry.go:31] will retry after 397.988495ms: missing components: kube-dns
	I1123 07:57:40.710088 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:40.811710 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:41.009441 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:41.009848 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:41.024021 1043921 system_pods.go:86] 19 kube-system pods found
	I1123 07:57:41.024102 1043921 system_pods.go:89] "coredns-66bc5c9577-d9vmc" [554db792-666e-408c-8ae1-52bf3fe32b9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 07:57:41.024126 1043921 system_pods.go:89] "csi-hostpath-attacher-0" [f979198a-fe36-4dc2-8a71-4c41af723eae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 07:57:41.024164 1043921 system_pods.go:89] "csi-hostpath-resizer-0" [95093cda-ec14-4f1e-ba9b-d696e2511286] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 07:57:41.024190 1043921 system_pods.go:89] "csi-hostpathplugin-8j7r2" [2de4707d-c64f-4ebf-9dd2-69abf0bd6418] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 07:57:41.024210 1043921 system_pods.go:89] "etcd-addons-782760" [dd4b8ebd-25d5-4754-a325-714c8496c618] Running
	I1123 07:57:41.024244 1043921 system_pods.go:89] "kindnet-qrqlv" [754150a4-5e3c-477e-96ac-67e2e8438826] Running
	I1123 07:57:41.024267 1043921 system_pods.go:89] "kube-apiserver-addons-782760" [826caeeb-44b7-449f-a5c2-4a32568deb97] Running
	I1123 07:57:41.024285 1043921 system_pods.go:89] "kube-controller-manager-addons-782760" [6e1ae611-5937-435a-aefa-2f94b36d08e0] Running
	I1123 07:57:41.024322 1043921 system_pods.go:89] "kube-ingress-dns-minikube" [9e0f12dd-7a60-47c9-89d9-feade94785dd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 07:57:41.024343 1043921 system_pods.go:89] "kube-proxy-jv2pd" [6c3bfa28-8f74-4b7d-9c44-ecdf225e77dd] Running
	I1123 07:57:41.024362 1043921 system_pods.go:89] "kube-scheduler-addons-782760" [b0d963fa-dc46-4b9c-880e-8d94d6872c1f] Running
	I1123 07:57:41.024397 1043921 system_pods.go:89] "metrics-server-85b7d694d7-l4cfr" [784e1f40-e163-423b-b2c4-7f3e9306070b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 07:57:41.024422 1043921 system_pods.go:89] "nvidia-device-plugin-daemonset-stqrq" [68a915e8-7aa3-479a-a75c-9cb582f7b791] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 07:57:41.024444 1043921 system_pods.go:89] "registry-6b586f9694-rblw8" [a69c6c76-cea7-4b78-b388-24fa7110f257] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:57:41.024478 1043921 system_pods.go:89] "registry-creds-764b6fb674-5m8ft" [6908fc1b-d56b-4159-bae1-3a2c7f324b9e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 07:57:41.024503 1043921 system_pods.go:89] "registry-proxy-crmkh" [db5947b1-31f5-4ab2-93fe-b0cb4359b4eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 07:57:41.024524 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4rwkm" [76531992-e9a2-42a3-8325-63265f73ce98] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:57:41.024561 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wqcnm" [7516bc6b-a724-4ecf-96e4-82ed81ef59f8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:57:41.024586 1043921 system_pods.go:89] "storage-provisioner" [15857a38-d245-473f-83fd-6096457f6f64] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 07:57:41.024618 1043921 retry.go:31] will retry after 480.908132ms: missing components: kube-dns
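
The retry.go:31 line above is the wait loop giving kube-dns more time: the kube-system pod listing is re-checked and the sleep between attempts grows on each retry (480ms here, 725ms on the next pass). Below is a minimal, self-contained Go sketch of that retry-with-backoff pattern; missingComponents, the timings, and the jitter are illustrative assumptions, not minikube's actual retry implementation.

// waitretry.go: a minimal sketch (not minikube's retry.go) of the
// "will retry after ...: missing components: kube-dns" behaviour above:
// poll a check function and back off a little more on every attempt.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// missingComponents is a stand-in for the real check that lists kube-system
// pods and reports which required apps are not yet Running.
func missingComponents(attempt int) []string {
	if attempt < 3 { // pretend kube-dns needs a few polls to come up
		return []string{"kube-dns"}
	}
	return nil
}

func waitForSystemPods(timeout time.Duration) error {
	start := time.Now()
	backoff := 250 * time.Millisecond
	for attempt := 0; ; attempt++ {
		missing := missingComponents(attempt)
		if len(missing) == 0 {
			return nil
		}
		if time.Since(start) > timeout {
			return errors.New("timed out waiting for system pods")
		}
		// Jittered, growing delay, mirroring the increasing retry intervals
		// (480ms, 725ms, ...) seen in the log above.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: missing components: %v\n", sleep, missing)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
}

func main() {
	if err := waitForSystemPods(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}
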
	I1123 07:57:41.192539 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:41.266524 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:41.503954 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:41.506405 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:41.510359 1043921 system_pods.go:86] 19 kube-system pods found
	I1123 07:57:41.510445 1043921 system_pods.go:89] "coredns-66bc5c9577-d9vmc" [554db792-666e-408c-8ae1-52bf3fe32b9a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 07:57:41.510471 1043921 system_pods.go:89] "csi-hostpath-attacher-0" [f979198a-fe36-4dc2-8a71-4c41af723eae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 07:57:41.510509 1043921 system_pods.go:89] "csi-hostpath-resizer-0" [95093cda-ec14-4f1e-ba9b-d696e2511286] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 07:57:41.510537 1043921 system_pods.go:89] "csi-hostpathplugin-8j7r2" [2de4707d-c64f-4ebf-9dd2-69abf0bd6418] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 07:57:41.510561 1043921 system_pods.go:89] "etcd-addons-782760" [dd4b8ebd-25d5-4754-a325-714c8496c618] Running
	I1123 07:57:41.510600 1043921 system_pods.go:89] "kindnet-qrqlv" [754150a4-5e3c-477e-96ac-67e2e8438826] Running
	I1123 07:57:41.510625 1043921 system_pods.go:89] "kube-apiserver-addons-782760" [826caeeb-44b7-449f-a5c2-4a32568deb97] Running
	I1123 07:57:41.510644 1043921 system_pods.go:89] "kube-controller-manager-addons-782760" [6e1ae611-5937-435a-aefa-2f94b36d08e0] Running
	I1123 07:57:41.510683 1043921 system_pods.go:89] "kube-ingress-dns-minikube" [9e0f12dd-7a60-47c9-89d9-feade94785dd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 07:57:41.510707 1043921 system_pods.go:89] "kube-proxy-jv2pd" [6c3bfa28-8f74-4b7d-9c44-ecdf225e77dd] Running
	I1123 07:57:41.510729 1043921 system_pods.go:89] "kube-scheduler-addons-782760" [b0d963fa-dc46-4b9c-880e-8d94d6872c1f] Running
	I1123 07:57:41.510772 1043921 system_pods.go:89] "metrics-server-85b7d694d7-l4cfr" [784e1f40-e163-423b-b2c4-7f3e9306070b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 07:57:41.510798 1043921 system_pods.go:89] "nvidia-device-plugin-daemonset-stqrq" [68a915e8-7aa3-479a-a75c-9cb582f7b791] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 07:57:41.510820 1043921 system_pods.go:89] "registry-6b586f9694-rblw8" [a69c6c76-cea7-4b78-b388-24fa7110f257] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:57:41.510853 1043921 system_pods.go:89] "registry-creds-764b6fb674-5m8ft" [6908fc1b-d56b-4159-bae1-3a2c7f324b9e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 07:57:41.510877 1043921 system_pods.go:89] "registry-proxy-crmkh" [db5947b1-31f5-4ab2-93fe-b0cb4359b4eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 07:57:41.510898 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4rwkm" [76531992-e9a2-42a3-8325-63265f73ce98] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:57:41.510933 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wqcnm" [7516bc6b-a724-4ecf-96e4-82ed81ef59f8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:57:41.510972 1043921 system_pods.go:89] "storage-provisioner" [15857a38-d245-473f-83fd-6096457f6f64] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 07:57:41.511018 1043921 retry.go:31] will retry after 725.611233ms: missing components: kube-dns
	I1123 07:57:41.693152 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:41.794587 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:42.005316 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:42.008482 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:42.194280 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:42.296605 1043921 system_pods.go:86] 19 kube-system pods found
	I1123 07:57:42.296699 1043921 system_pods.go:89] "coredns-66bc5c9577-d9vmc" [554db792-666e-408c-8ae1-52bf3fe32b9a] Running
	I1123 07:57:42.296727 1043921 system_pods.go:89] "csi-hostpath-attacher-0" [f979198a-fe36-4dc2-8a71-4c41af723eae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 07:57:42.296772 1043921 system_pods.go:89] "csi-hostpath-resizer-0" [95093cda-ec14-4f1e-ba9b-d696e2511286] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 07:57:42.296801 1043921 system_pods.go:89] "csi-hostpathplugin-8j7r2" [2de4707d-c64f-4ebf-9dd2-69abf0bd6418] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 07:57:42.296820 1043921 system_pods.go:89] "etcd-addons-782760" [dd4b8ebd-25d5-4754-a325-714c8496c618] Running
	I1123 07:57:42.296855 1043921 system_pods.go:89] "kindnet-qrqlv" [754150a4-5e3c-477e-96ac-67e2e8438826] Running
	I1123 07:57:42.296880 1043921 system_pods.go:89] "kube-apiserver-addons-782760" [826caeeb-44b7-449f-a5c2-4a32568deb97] Running
	I1123 07:57:42.296901 1043921 system_pods.go:89] "kube-controller-manager-addons-782760" [6e1ae611-5937-435a-aefa-2f94b36d08e0] Running
	I1123 07:57:42.296939 1043921 system_pods.go:89] "kube-ingress-dns-minikube" [9e0f12dd-7a60-47c9-89d9-feade94785dd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 07:57:42.296961 1043921 system_pods.go:89] "kube-proxy-jv2pd" [6c3bfa28-8f74-4b7d-9c44-ecdf225e77dd] Running
	I1123 07:57:42.296981 1043921 system_pods.go:89] "kube-scheduler-addons-782760" [b0d963fa-dc46-4b9c-880e-8d94d6872c1f] Running
	I1123 07:57:42.297022 1043921 system_pods.go:89] "metrics-server-85b7d694d7-l4cfr" [784e1f40-e163-423b-b2c4-7f3e9306070b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 07:57:42.297054 1043921 system_pods.go:89] "nvidia-device-plugin-daemonset-stqrq" [68a915e8-7aa3-479a-a75c-9cb582f7b791] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 07:57:42.297001 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:42.297103 1043921 system_pods.go:89] "registry-6b586f9694-rblw8" [a69c6c76-cea7-4b78-b388-24fa7110f257] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:57:42.297138 1043921 system_pods.go:89] "registry-creds-764b6fb674-5m8ft" [6908fc1b-d56b-4159-bae1-3a2c7f324b9e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 07:57:42.297153 1043921 system_pods.go:89] "registry-proxy-crmkh" [db5947b1-31f5-4ab2-93fe-b0cb4359b4eb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 07:57:42.297164 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4rwkm" [76531992-e9a2-42a3-8325-63265f73ce98] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:57:42.297175 1043921 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wqcnm" [7516bc6b-a724-4ecf-96e4-82ed81ef59f8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:57:42.297183 1043921 system_pods.go:89] "storage-provisioner" [15857a38-d245-473f-83fd-6096457f6f64] Running
	I1123 07:57:42.297193 1043921 system_pods.go:126] duration metric: took 2.392982357s to wait for k8s-apps to be running ...
	I1123 07:57:42.297223 1043921 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 07:57:42.297304 1043921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 07:57:42.313635 1043921 system_svc.go:56] duration metric: took 16.40013ms WaitForService to wait for kubelet
	I1123 07:57:42.313737 1043921 kubeadm.go:587] duration metric: took 43.270002452s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 07:57:42.313770 1043921 node_conditions.go:102] verifying NodePressure condition ...
	I1123 07:57:42.317004 1043921 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 07:57:42.317107 1043921 node_conditions.go:123] node cpu capacity is 2
	I1123 07:57:42.317139 1043921 node_conditions.go:105] duration metric: took 3.336832ms to run NodePressure ...
	I1123 07:57:42.317178 1043921 start.go:242] waiting for startup goroutines ...
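
Once the k8s-apps wait completes, the log moves on to the kubelet service check: it runs sudo systemctl is-active --quiet service kubelet on the node, records how long that took, and then verifies node storage and CPU capacity. A minimal sketch of the service check is below, assuming it runs locally rather than through minikube's SSH runner; the file name and output format are illustrative.

// kubeletcheck.go: a minimal sketch, not minikube's system_svc.go, of the
// "waiting for kubelet service" step logged above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// kubeletRunning shells out to systemctl exactly as the log shows; the
// command exits 0 when the unit is active, non-zero otherwise.
func kubeletRunning() (bool, time.Duration) {
	start := time.Now()
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	return err == nil, time.Since(start)
}

func main() {
	ok, took := kubeletRunning()
	fmt.Printf("duration metric: took %v WaitForService to wait for kubelet (running=%v)\n", took, ok)
}
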
	I1123 07:57:42.506487 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:42.506699 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:42.693266 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:42.766335 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:43.004337 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:43.007076 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:43.192171 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:43.266212 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:43.504063 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:43.505887 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:43.693684 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:43.794480 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:44.004638 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:44.005796 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:44.193494 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:44.267596 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:44.505688 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:44.505838 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:44.692653 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:44.767129 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:45.006820 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:45.009809 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:45.207981 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:45.270300 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:45.505325 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:45.505598 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:45.692758 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:45.766903 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:46.007564 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:46.008095 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:46.192839 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:46.266861 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:46.503711 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:46.504730 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:46.692678 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:46.767545 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:47.007408 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:47.007927 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:47.192967 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:47.265960 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:47.503274 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:47.505445 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:47.692946 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:47.793607 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:48.003821 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:48.006403 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:48.192416 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:48.266217 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:48.503793 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:48.505170 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:48.693582 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:48.766883 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:49.006458 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:49.008477 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:49.192702 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:49.266931 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:49.504084 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:49.505852 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:49.693847 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:49.794270 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:50.004719 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:50.016117 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:50.193402 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:50.266564 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:50.503674 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:50.505883 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:50.692756 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:50.766934 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:51.006749 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:51.006923 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:51.192979 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:51.270915 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:51.511633 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:51.512292 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:51.693613 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:51.795747 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:52.008771 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:52.009257 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:52.192948 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:52.266836 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:52.508743 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:52.509322 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:52.693954 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:52.769788 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:53.007734 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:53.007985 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:53.194600 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:53.295429 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:53.507502 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:53.508074 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:53.705484 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:53.787380 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:54.008236 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:54.008539 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:54.192634 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:54.266910 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:54.503538 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:54.505158 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:54.692427 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:54.776786 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:55.006055 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:55.012177 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:55.193425 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:55.266816 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:55.504533 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:55.507309 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:55.700234 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:55.767450 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:56.008054 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:56.008785 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:56.193583 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:56.266505 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:56.504800 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:56.504944 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:56.692869 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:56.767132 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:57.006802 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:57.006996 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:57.192287 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:57.266738 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:57.503501 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:57.505625 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:57.692613 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:57.767240 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:58.004708 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:58.008130 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:58.192930 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:58.266627 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:58.506222 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:58.506332 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:58.692530 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:58.767089 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:59.006579 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:59.009226 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:59.193005 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:59.266747 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:59.504843 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:59.505175 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:59.692534 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:59.767207 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:00.011958 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:00.032117 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:00.228389 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:00.281522 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:00.505899 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:00.506511 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:00.693439 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:00.767004 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:01.007027 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:01.008495 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:01.193122 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:01.267411 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:01.506231 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:01.508534 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:01.693491 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:01.768664 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:02.006370 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:02.008220 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:02.195407 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:02.267380 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:02.506478 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:02.506744 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:02.693372 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:02.766939 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:03.016414 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:03.023070 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:03.193829 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:03.295361 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:03.509233 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:03.509794 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:03.693836 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:03.766255 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:04.006571 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:04.007051 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:04.193400 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:04.266822 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:04.505579 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:04.507416 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:04.693185 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:04.766613 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:05.005762 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:05.008243 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:05.194037 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:05.266614 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:05.503758 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:05.505319 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:05.692337 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:05.766301 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:06.003406 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:06.007636 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:06.192968 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:06.266503 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:06.504864 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:06.505231 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:06.693689 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:06.766692 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:07.005469 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:07.007262 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:07.192727 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:07.267078 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:07.505479 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:07.506346 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:07.692449 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:07.766462 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:08.007721 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:08.007885 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:08.204862 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:08.265875 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:08.503932 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:08.506179 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:08.693110 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:08.767128 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:09.004316 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:09.007931 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:09.193532 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:09.267079 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:09.505825 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:09.506387 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:09.692696 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:09.767045 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:10.005477 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:10.007638 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:10.193955 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:10.267357 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:10.504838 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:10.507047 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:10.693231 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:10.766720 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:11.005961 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:11.007585 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:11.193189 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:11.267274 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:11.504759 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:11.505953 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:11.694265 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:11.767264 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:12.004826 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:12.008935 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:12.192923 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:12.266429 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:12.504133 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:12.512396 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:12.692438 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:12.766310 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:13.005240 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:13.006847 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:13.193237 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:13.266531 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:13.506188 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:13.506534 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:13.692776 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:13.766945 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:14.004941 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:14.006997 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:14.193274 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:14.266570 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:14.503423 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:14.505448 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:14.692580 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:14.766357 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:15.008405 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:15.008956 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:15.193413 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:15.266582 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:15.503947 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:15.505317 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:58:15.693120 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:15.766432 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:16.004420 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:16.017231 1043921 kapi.go:107] duration metric: took 1m11.015389153s to wait for kubernetes.io/minikube-addons=registry ...
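
The kapi.go:96 / kapi.go:107 pairs above are per-addon waits keyed on a label selector: the registry wait polled pods matching kubernetes.io/minikube-addons=registry in kube-system until every match was Running, which took 1m11s. The sketch below is a hedged client-go illustration of that polling loop; the kubeconfig path, poll interval, timeout, and function names are assumptions for illustration, not minikube's kapi.go.

// kapiwait.go: a minimal sketch, assuming client-go, of polling pods by
// label selector until they all report phase Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodsByLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			if allRunning {
				fmt.Printf("duration metric: took %v to wait for %s ...\n", time.Since(start), selector)
				return nil
			}
		}
		if time.Since(start) > timeout {
			return fmt.Errorf("timed out waiting for %s", selector)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	// The kubeconfig path is illustrative; the real test uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodsByLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
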
	I1123 07:58:16.192842 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:16.266907 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:16.503561 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:16.692831 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:16.766534 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:17.004724 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:17.193491 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:17.267095 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:17.503472 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:17.692353 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:17.766506 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:18.007834 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:18.192613 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:18.267348 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:18.503889 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:18.693000 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:18.773862 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:19.005588 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:19.193728 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:19.267526 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:19.504378 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:19.692636 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:19.767124 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:20.005069 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:20.193318 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:20.266663 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:20.503977 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:20.692810 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:20.766966 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:21.003379 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:21.193256 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:21.266972 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:21.503104 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:21.692530 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:21.766374 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:22.006986 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:22.193261 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:22.266698 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:22.504883 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:22.692861 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:22.765906 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:23.003132 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:23.192827 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:23.265582 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:23.504939 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:23.694242 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:23.766197 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:24.004431 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:24.193163 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:24.268575 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:24.504602 1043921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:58:24.692881 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:24.766707 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:25.005405 1043921 kapi.go:107] duration metric: took 1m20.005430467s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1123 07:58:25.193448 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:25.268918 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:25.776010 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:25.776760 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:26.193022 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:26.266299 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:26.692285 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:26.766465 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:27.197813 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:27.267545 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:27.692954 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:27.767247 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:28.191946 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:28.275166 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:28.693761 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:28.775843 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:29.194368 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:29.267214 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:29.693315 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:29.767779 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:30.195975 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:30.268765 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:30.692535 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:30.766389 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:31.192830 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:31.265840 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:31.692971 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:31.766921 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:32.198486 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:32.293264 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:32.692362 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:32.766086 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:33.197505 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:33.266999 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:33.692058 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:33.766241 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:34.192804 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:34.266538 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:34.692130 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:34.766678 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:58:35.193571 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:35.266569 1043921 kapi.go:107] duration metric: took 1m29.503694816s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1123 07:58:35.692690 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:36.193234 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:36.692031 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:37.192808 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:37.693451 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:38.193969 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:38.692488 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:39.192945 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:39.692547 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:40.193175 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:40.693765 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:41.193075 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:41.695212 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:42.193150 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:42.692465 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:43.193262 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:43.693316 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:44.192690 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:44.692991 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:45.196723 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:45.693258 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:46.192524 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:46.692514 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:47.192969 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:47.692439 1043921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:58:48.193597 1043921 kapi.go:107] duration metric: took 1m39.504270078s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1123 07:58:48.196576 1043921 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-782760 cluster.
	I1123 07:58:48.199418 1043921 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1123 07:58:48.202300 1043921 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1123 07:58:48.205000 1043921 out.go:179] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner, amd-gpu-device-plugin, inspektor-gadget, registry-creds, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1123 07:58:48.207771 1043921 addons.go:530] duration metric: took 1m49.163731725s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner amd-gpu-device-plugin inspektor-gadget registry-creds metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1123 07:58:48.207823 1043921 start.go:247] waiting for cluster config update ...
	I1123 07:58:48.207863 1043921 start.go:256] writing updated cluster config ...
	I1123 07:58:48.208202 1043921 ssh_runner.go:195] Run: rm -f paused
	I1123 07:58:48.213178 1043921 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 07:58:48.294503 1043921 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d9vmc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:48.300525 1043921 pod_ready.go:94] pod "coredns-66bc5c9577-d9vmc" is "Ready"
	I1123 07:58:48.300560 1043921 pod_ready.go:86] duration metric: took 6.026831ms for pod "coredns-66bc5c9577-d9vmc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:48.303232 1043921 pod_ready.go:83] waiting for pod "etcd-addons-782760" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:48.309523 1043921 pod_ready.go:94] pod "etcd-addons-782760" is "Ready"
	I1123 07:58:48.309549 1043921 pod_ready.go:86] duration metric: took 6.293818ms for pod "etcd-addons-782760" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:48.312061 1043921 pod_ready.go:83] waiting for pod "kube-apiserver-addons-782760" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:48.316408 1043921 pod_ready.go:94] pod "kube-apiserver-addons-782760" is "Ready"
	I1123 07:58:48.316434 1043921 pod_ready.go:86] duration metric: took 4.347445ms for pod "kube-apiserver-addons-782760" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:48.318735 1043921 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-782760" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:48.617398 1043921 pod_ready.go:94] pod "kube-controller-manager-addons-782760" is "Ready"
	I1123 07:58:48.617424 1043921 pod_ready.go:86] duration metric: took 298.66452ms for pod "kube-controller-manager-addons-782760" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:48.817704 1043921 pod_ready.go:83] waiting for pod "kube-proxy-jv2pd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:49.217308 1043921 pod_ready.go:94] pod "kube-proxy-jv2pd" is "Ready"
	I1123 07:58:49.217337 1043921 pod_ready.go:86] duration metric: took 399.60579ms for pod "kube-proxy-jv2pd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:49.418163 1043921 pod_ready.go:83] waiting for pod "kube-scheduler-addons-782760" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:49.817143 1043921 pod_ready.go:94] pod "kube-scheduler-addons-782760" is "Ready"
	I1123 07:58:49.817174 1043921 pod_ready.go:86] duration metric: took 398.98294ms for pod "kube-scheduler-addons-782760" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:58:49.817187 1043921 pod_ready.go:40] duration metric: took 1.603976757s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 07:58:49.874412 1043921 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 07:58:49.877676 1043921 out.go:179] * Done! kubectl is now configured to use "addons-782760" cluster and "default" namespace by default
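
The kapi.go entries above are minikube's addon wait loop: it repeatedly lists the pods behind each label selector (app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=gcp-auth) and logs progress until the selector is satisfied, then prints the "duration metric: took ..." line. A minimal client-go sketch of that polling pattern is below; the namespace, selector pairing, interval, and timeout are assumptions for illustration, not minikube's actual implementation.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForSelector polls pods matching a label selector until all report Running,
    // roughly the "waiting for pod ... current state: ..." loop logged above.
    func waitForSelector(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			return err
    		}
    		allRunning := len(pods.Items) > 0
    		for _, p := range pods.Items {
    			if p.Status.Phase != corev1.PodRunning {
    				allRunning = false
    				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    				break
    			}
    		}
    		if allRunning {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %q in namespace %q", selector, ns)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	// Example: the selector the log above waits ~1m40s for before printing the gcp-auth hints.
    	if err := waitForSelector(context.Background(), cs, "gcp-auth",
    		"kubernetes.io/minikube-addons=gcp-auth", 4*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("all pods Running")
    }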
	
	
	==> CRI-O <==
	Nov 23 07:59:17 addons-782760 crio[830]: time="2025-11-23T07:59:17.427329579Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 07:59:17 addons-782760 crio[830]: time="2025-11-23T07:59:17.427821323Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 07:59:17 addons-782760 crio[830]: time="2025-11-23T07:59:17.443738973Z" level=info msg="Created container 5a3fa67b1b8764c2aa2459c8e6dbbed23f97cf1772685ea7a498c4e01181fcfc: default/test-local-path/busybox" id=e9f793b4-43f9-4080-abc6-515e7cac20a0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 07:59:17 addons-782760 crio[830]: time="2025-11-23T07:59:17.44473613Z" level=info msg="Starting container: 5a3fa67b1b8764c2aa2459c8e6dbbed23f97cf1772685ea7a498c4e01181fcfc" id=b2ef4cb3-e9a3-46e3-8eec-58813ab0a9c0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 07:59:17 addons-782760 crio[830]: time="2025-11-23T07:59:17.449279672Z" level=info msg="Started container" PID=5305 containerID=5a3fa67b1b8764c2aa2459c8e6dbbed23f97cf1772685ea7a498c4e01181fcfc description=default/test-local-path/busybox id=b2ef4cb3-e9a3-46e3-8eec-58813ab0a9c0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f4ec248f274daa476bb63422c3b43a6756f5058f3b044cb5e71b669bba148394
	Nov 23 07:59:19 addons-782760 crio[830]: time="2025-11-23T07:59:19.207982473Z" level=info msg="Stopping pod sandbox: f4ec248f274daa476bb63422c3b43a6756f5058f3b044cb5e71b669bba148394" id=735ffaae-9ae2-4afc-af1f-f11fa3f13d49 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 07:59:19 addons-782760 crio[830]: time="2025-11-23T07:59:19.208293898Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:f4ec248f274daa476bb63422c3b43a6756f5058f3b044cb5e71b669bba148394 UID:44bf05ce-a8c1-4035-bb77-19931d7edbb5 NetNS:/var/run/netns/292eaa35-bb17-4452-ba3e-a3a219f1908b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40012ac520}] Aliases:map[]}"
	Nov 23 07:59:19 addons-782760 crio[830]: time="2025-11-23T07:59:19.208460745Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Nov 23 07:59:19 addons-782760 crio[830]: time="2025-11-23T07:59:19.233213678Z" level=info msg="Stopped pod sandbox: f4ec248f274daa476bb63422c3b43a6756f5058f3b044cb5e71b669bba148394" id=735ffaae-9ae2-4afc-af1f-f11fa3f13d49 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 07:59:20 addons-782760 crio[830]: time="2025-11-23T07:59:20.536919453Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-4edddf59-348f-4660-91bb-3a71fe1ac723/POD" id=6db69f70-6a1c-4106-8b5d-ed1f5e3ac728 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 07:59:20 addons-782760 crio[830]: time="2025-11-23T07:59:20.536999533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 07:59:20 addons-782760 crio[830]: time="2025-11-23T07:59:20.55897648Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-4edddf59-348f-4660-91bb-3a71fe1ac723 Namespace:local-path-storage ID:7bd4f678df5d9cf076b645f18b2926161fa1f22061741b1e0d2d4b3062487b7d UID:9316bc5a-9f08-4878-b7ae-1dcd103b19d4 NetNS:/var/run/netns/3d04fb4a-f7d4-4cdb-8fe3-b3ff7f5816c9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40012ac970}] Aliases:map[]}"
	Nov 23 07:59:20 addons-782760 crio[830]: time="2025-11-23T07:59:20.559018661Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-4edddf59-348f-4660-91bb-3a71fe1ac723 to CNI network \"kindnet\" (type=ptp)"
	Nov 23 07:59:20 addons-782760 crio[830]: time="2025-11-23T07:59:20.573202199Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-4edddf59-348f-4660-91bb-3a71fe1ac723 Namespace:local-path-storage ID:7bd4f678df5d9cf076b645f18b2926161fa1f22061741b1e0d2d4b3062487b7d UID:9316bc5a-9f08-4878-b7ae-1dcd103b19d4 NetNS:/var/run/netns/3d04fb4a-f7d4-4cdb-8fe3-b3ff7f5816c9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40012ac970}] Aliases:map[]}"
	Nov 23 07:59:20 addons-782760 crio[830]: time="2025-11-23T07:59:20.57338694Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-4edddf59-348f-4660-91bb-3a71fe1ac723 for CNI network kindnet (type=ptp)"
	Nov 23 07:59:20 addons-782760 crio[830]: time="2025-11-23T07:59:20.583166541Z" level=info msg="Ran pod sandbox 7bd4f678df5d9cf076b645f18b2926161fa1f22061741b1e0d2d4b3062487b7d with infra container: local-path-storage/helper-pod-delete-pvc-4edddf59-348f-4660-91bb-3a71fe1ac723/POD" id=6db69f70-6a1c-4106-8b5d-ed1f5e3ac728 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 07:59:20 addons-782760 crio[830]: time="2025-11-23T07:59:20.58462474Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=41372cdd-ba0f-495f-9ff3-dd6d4558a7dd name=/runtime.v1.ImageService/ImageStatus
	Nov 23 07:59:20 addons-782760 crio[830]: time="2025-11-23T07:59:20.588827122Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=940ee2d1-159e-461d-b802-24d02e1d8fac name=/runtime.v1.ImageService/ImageStatus
	Nov 23 07:59:20 addons-782760 crio[830]: time="2025-11-23T07:59:20.599749534Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-4edddf59-348f-4660-91bb-3a71fe1ac723/helper-pod" id=397c09e1-03e6-4b98-8a8b-8ae9f1be5c24 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 07:59:20 addons-782760 crio[830]: time="2025-11-23T07:59:20.599906059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 07:59:20 addons-782760 crio[830]: time="2025-11-23T07:59:20.609011817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 07:59:20 addons-782760 crio[830]: time="2025-11-23T07:59:20.609535757Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 07:59:20 addons-782760 crio[830]: time="2025-11-23T07:59:20.636896205Z" level=info msg="Created container 17b42d7b9f5359ce82db563ff0aa1da0f33be5437b565bc74b9410cceb267a11: local-path-storage/helper-pod-delete-pvc-4edddf59-348f-4660-91bb-3a71fe1ac723/helper-pod" id=397c09e1-03e6-4b98-8a8b-8ae9f1be5c24 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 07:59:20 addons-782760 crio[830]: time="2025-11-23T07:59:20.640362371Z" level=info msg="Starting container: 17b42d7b9f5359ce82db563ff0aa1da0f33be5437b565bc74b9410cceb267a11" id=e6840d9c-d867-41c5-9830-1b1ab4ebc720 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 07:59:20 addons-782760 crio[830]: time="2025-11-23T07:59:20.647882673Z" level=info msg="Started container" PID=5397 containerID=17b42d7b9f5359ce82db563ff0aa1da0f33be5437b565bc74b9410cceb267a11 description=local-path-storage/helper-pod-delete-pvc-4edddf59-348f-4660-91bb-3a71fe1ac723/helper-pod id=e6840d9c-d867-41c5-9830-1b1ab4ebc720 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7bd4f678df5d9cf076b645f18b2926161fa1f22061741b1e0d2d4b3062487b7d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	17b42d7b9f535       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   7bd4f678df5d9       helper-pod-delete-pvc-4edddf59-348f-4660-91bb-3a71fe1ac723   local-path-storage
	5a3fa67b1b876       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            4 seconds ago        Exited              busybox                                  0                   f4ec248f274da       test-local-path                                              default
	63c8ba97c20c8       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            8 seconds ago        Exited              helper-pod                               0                   5b8e5b6e44ba7       helper-pod-create-pvc-4edddf59-348f-4660-91bb-3a71fe1ac723   local-path-storage
	809f44d2aead6       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          8 seconds ago        Exited              registry-test                            0                   ef494bb406b80       registry-test                                                default
	3db8fe62cb746       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          29 seconds ago       Running             busybox                                  0                   5bb3d71b77784       busybox                                                      default
	e6b397fa20b7b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 34 seconds ago       Running             gcp-auth                                 0                   83040823fe3e0       gcp-auth-78565c9fb4-ntzsg                                    gcp-auth
	b7dbc42af3eaa       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          47 seconds ago       Running             csi-snapshotter                          0                   78aa636a62e32       csi-hostpathplugin-8j7r2                                     kube-system
	e81b53e67dd69       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          48 seconds ago       Running             csi-provisioner                          0                   78aa636a62e32       csi-hostpathplugin-8j7r2                                     kube-system
	25c0aa23665db       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            50 seconds ago       Running             liveness-probe                           0                   78aa636a62e32       csi-hostpathplugin-8j7r2                                     kube-system
	654a0f71268c2       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           51 seconds ago       Running             hostpath                                 0                   78aa636a62e32       csi-hostpathplugin-8j7r2                                     kube-system
	05fe963f89f66       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                52 seconds ago       Running             node-driver-registrar                    0                   78aa636a62e32       csi-hostpathplugin-8j7r2                                     kube-system
	96dcbe281d6f8       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            53 seconds ago       Running             gadget                                   0                   852a4ec0a6e40       gadget-pqgzc                                                 gadget
	0aa44e320a18e       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             58 seconds ago       Running             controller                               0                   bce481b6afdec       ingress-nginx-controller-6c8bf45fb-7jxcp                     ingress-nginx
	ff09ce175fe75       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   c78167bd447eb       csi-hostpath-attacher-0                                      kube-system
	7ca479867b243       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   78aa636a62e32       csi-hostpathplugin-8j7r2                                     kube-system
	35dd0f9bcb50a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   2c16a8b01a95f       registry-proxy-crmkh                                         kube-system
	90e12086b17a9       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   aba90cf2826d7       nvidia-device-plugin-daemonset-stqrq                         kube-system
	410c2359fb0c0       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   a054f5920c449       snapshot-controller-7d9fbc56b8-wqcnm                         kube-system
	774655bd891a2       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   602d4f62989cc       yakd-dashboard-5ff678cb9-6j7jv                               yakd-dashboard
	9311aa036bd97       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   5480a41940775       kube-ingress-dns-minikube                                    kube-system
	1d4e31902581e       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   9e63ad95dec7f       registry-6b586f9694-rblw8                                    kube-system
	9734ce796f3ef       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   4bff8ec0f7b4c       csi-hostpath-resizer-0                                       kube-system
	8d569fddb15d1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago   Exited              patch                                    0                   5992efcdc388f       ingress-nginx-admission-patch-g4ft4                          ingress-nginx
	fb98b04224a9c       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   172424698e105       metrics-server-85b7d694d7-l4cfr                              kube-system
	e99c3d22230ac       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago   Exited              create                                   0                   43512c09e295d       ingress-nginx-admission-create-c8dgn                         ingress-nginx
	8e887d5a1cac1       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   0ecd11a14c527       local-path-provisioner-648f6765c9-7zdjv                      local-path-storage
	d2ffd09041ccf       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   b42228ca86d71       snapshot-controller-7d9fbc56b8-4rwkm                         kube-system
	ea6b982ceb37f       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               About a minute ago   Running             cloud-spanner-emulator                   0                   d1a586cc7c66f       cloud-spanner-emulator-5bdddb765-wn4d8                       default
	685798fa38932       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   dae52078c9fdc       storage-provisioner                                          kube-system
	01a96c05c2e23       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   306c096ab40e7       coredns-66bc5c9577-d9vmc                                     kube-system
	995c0ad221a0e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   c75d75de3c2c8       kube-proxy-jv2pd                                             kube-system
	d3d5fbc406391       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   6c9e098586e6d       kindnet-qrqlv                                                kube-system
	03fd92afca30f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   de7486f71e017       etcd-addons-782760                                           kube-system
	7b54407c8a503       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   eab14dfb03869       kube-scheduler-addons-782760                                 kube-system
	4952e333e5cbc       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   d46c7e1746d6f       kube-controller-manager-addons-782760                        kube-system
	1e9a39b963c81       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   1ce9f5174506d       kube-apiserver-addons-782760                                 kube-system
	
	
	==> coredns [01a96c05c2e23fce327adec63f507ecc75154c56dc51b79294c0ada40f73d486] <==
	[INFO] 10.244.0.15:56386 - 14745 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.003411734s
	[INFO] 10.244.0.15:56386 - 57024 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00018397s
	[INFO] 10.244.0.15:56386 - 43933 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000121031s
	[INFO] 10.244.0.15:43531 - 51117 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000190058s
	[INFO] 10.244.0.15:43531 - 50854 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149066s
	[INFO] 10.244.0.15:49798 - 46275 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000113072s
	[INFO] 10.244.0.15:49798 - 46513 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091033s
	[INFO] 10.244.0.15:57131 - 10808 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012304s
	[INFO] 10.244.0.15:57131 - 10529 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090606s
	[INFO] 10.244.0.15:42837 - 61578 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001699414s
	[INFO] 10.244.0.15:42837 - 61825 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001564624s
	[INFO] 10.244.0.15:52120 - 57605 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000174551s
	[INFO] 10.244.0.15:52120 - 57392 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000294441s
	[INFO] 10.244.0.21:37004 - 63790 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00017803s
	[INFO] 10.244.0.21:40712 - 33026 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000457603s
	[INFO] 10.244.0.21:33073 - 42541 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000143898s
	[INFO] 10.244.0.21:48780 - 21993 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000101092s
	[INFO] 10.244.0.21:52179 - 41164 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000106803s
	[INFO] 10.244.0.21:41575 - 21852 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090344s
	[INFO] 10.244.0.21:41629 - 61232 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002064918s
	[INFO] 10.244.0.21:37794 - 19102 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001709506s
	[INFO] 10.244.0.21:42252 - 65258 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000751569s
	[INFO] 10.244.0.21:47660 - 18640 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003215325s
	[INFO] 10.244.0.23:36263 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000157772s
	[INFO] 10.244.0.23:54909 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000157075s
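
The query pairs above are standard resolv.conf search-path expansion: with the pod's default ndots:5, a short in-cluster name is first tried with each search domain appended (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, and the node's us-east-2.compute.internal suffix), producing the NXDOMAIN answers, before the exact service name resolves with NOERROR. A trailing dot marks a name as fully qualified and skips that expansion; a minimal in-pod sketch, for illustration only:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// The trailing dot makes the name fully qualified, so the resolver queries it
    	// directly instead of walking the search domains seen in the coredns log above.
    	addrs, err := net.LookupHost("registry.kube-system.svc.cluster.local.")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(addrs)
    }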
	
	
	==> describe nodes <==
	Name:               addons-782760
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-782760
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=addons-782760
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T07_56_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-782760
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-782760"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 07:56:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-782760
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 07:59:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 07:58:56 +0000   Sun, 23 Nov 2025 07:56:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 07:58:56 +0000   Sun, 23 Nov 2025 07:56:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 07:58:56 +0000   Sun, 23 Nov 2025 07:56:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 07:58:56 +0000   Sun, 23 Nov 2025 07:57:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-782760
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                e4972c17-bf29-4288-839a-93a0193f5931
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     cloud-spanner-emulator-5bdddb765-wn4d8      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gadget                      gadget-pqgzc                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  gcp-auth                    gcp-auth-78565c9fb4-ntzsg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-7jxcp    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m18s
	  kube-system                 coredns-66bc5c9577-d9vmc                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m24s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 csi-hostpathplugin-8j7r2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 etcd-addons-782760                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m29s
	  kube-system                 kindnet-qrqlv                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m24s
	  kube-system                 kube-apiserver-addons-782760                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-controller-manager-addons-782760       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-proxy-jv2pd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-addons-782760                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 metrics-server-85b7d694d7-l4cfr             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m18s
	  kube-system                 nvidia-device-plugin-daemonset-stqrq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 registry-6b586f9694-rblw8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 registry-creds-764b6fb674-5m8ft             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 registry-proxy-crmkh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 snapshot-controller-7d9fbc56b8-4rwkm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 snapshot-controller-7d9fbc56b8-wqcnm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  local-path-storage          local-path-provisioner-648f6765c9-7zdjv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-6j7jv              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m22s  kube-proxy       
	  Normal   Starting                 2m29s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m29s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m29s  kubelet          Node addons-782760 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m29s  kubelet          Node addons-782760 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m29s  kubelet          Node addons-782760 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m25s  node-controller  Node addons-782760 event: Registered Node addons-782760 in Controller
	  Normal   NodeReady                103s   kubelet          Node addons-782760 status is now: NodeReady
	
	
	==> dmesg <==
	[ +30.904426] overlayfs: idmapped layers are currently not supported
	[Nov23 07:10] overlayfs: idmapped layers are currently not supported
	[Nov23 07:12] overlayfs: idmapped layers are currently not supported
	[Nov23 07:13] overlayfs: idmapped layers are currently not supported
	[Nov23 07:14] overlayfs: idmapped layers are currently not supported
	[ +16.709544] overlayfs: idmapped layers are currently not supported
	[ +39.052436] overlayfs: idmapped layers are currently not supported
	[Nov23 07:16] overlayfs: idmapped layers are currently not supported
	[Nov23 07:17] overlayfs: idmapped layers are currently not supported
	[Nov23 07:18] overlayfs: idmapped layers are currently not supported
	[ +42.777291] overlayfs: idmapped layers are currently not supported
	[Nov23 07:19] overlayfs: idmapped layers are currently not supported
	[Nov23 07:20] overlayfs: idmapped layers are currently not supported
	[Nov23 07:21] overlayfs: idmapped layers are currently not supported
	[ +25.538176] overlayfs: idmapped layers are currently not supported
	[Nov23 07:22] overlayfs: idmapped layers are currently not supported
	[ +17.484475] overlayfs: idmapped layers are currently not supported
	[Nov23 07:23] overlayfs: idmapped layers are currently not supported
	[Nov23 07:24] overlayfs: idmapped layers are currently not supported
	[Nov23 07:25] overlayfs: idmapped layers are currently not supported
	[Nov23 07:26] overlayfs: idmapped layers are currently not supported
	[Nov23 07:27] overlayfs: idmapped layers are currently not supported
	[ +38.121959] overlayfs: idmapped layers are currently not supported
	[Nov23 07:55] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 07:56] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [03fd92afca30f9b387a50e40f209a51d44d2219bf6337bbe9b4396831fce9ad8] <==
	{"level":"warn","ts":"2025-11-23T07:56:49.367891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.383087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.408523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.439739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.463751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.484391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.516667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.519244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.552798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.574300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.582074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.598288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.620015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.653056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.662612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.697865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.721015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.730790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:49.840303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:57:05.720418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:57:05.743929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:57:27.719377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:57:27.734312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:57:27.771152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:57:27.779450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35708","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [e6b397fa20b7b42f76e17ad2ed2e50d2ded0d57757201cd0fcc2d4d1aa701e3a] <==
	2025/11/23 07:58:47 GCP Auth Webhook started!
	2025/11/23 07:58:50 Ready to marshal response ...
	2025/11/23 07:58:50 Ready to write response ...
	2025/11/23 07:58:50 Ready to marshal response ...
	2025/11/23 07:58:50 Ready to write response ...
	2025/11/23 07:58:50 Ready to marshal response ...
	2025/11/23 07:58:50 Ready to write response ...
	2025/11/23 07:59:10 Ready to marshal response ...
	2025/11/23 07:59:10 Ready to write response ...
	2025/11/23 07:59:12 Ready to marshal response ...
	2025/11/23 07:59:12 Ready to write response ...
	2025/11/23 07:59:12 Ready to marshal response ...
	2025/11/23 07:59:12 Ready to write response ...
	2025/11/23 07:59:20 Ready to marshal response ...
	2025/11/23 07:59:20 Ready to write response ...
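
Each "Ready to marshal/write response" pair above appears to correspond to the gcp-auth mutating webhook handling an admission request for a newly created pod, which is when the GCP credential secret gets mounted, per the hints printed once the addon finished enabling. A pod can opt out with the gcp-auth-skip-secret label mentioned there; a hedged sketch follows (the "true" value, pod name, and image are assumptions, not taken from this report):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"sigs.k8s.io/yaml"
    )

    func main() {
    	pod := corev1.Pod{
    		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
    		ObjectMeta: metav1.ObjectMeta{
    			Name: "no-gcp-creds",
    			// Label key advertised in the addon output above; "true" is an assumed value.
    			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:    "app",
    				Image:   "busybox:stable",
    				Command: []string{"sleep", "3600"},
    			}},
    		},
    	}
    	out, err := yaml.Marshal(pod)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(out))
    }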
	
	
	==> kernel <==
	 07:59:22 up  8:41,  0 user,  load average: 2.53, 1.32, 0.96
	Linux addons-782760 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d3d5fbc406391cea6bd05d6bf3e77708af72d668d9cf1f8f67553646b8ebd263] <==
	I1123 07:57:31.114162       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 07:57:31.114194       1 metrics.go:72] Registering metrics
	I1123 07:57:31.114275       1 controller.go:711] "Syncing nftables rules"
	I1123 07:57:39.608899       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:57:39.608953       1 main.go:301] handling current node
	I1123 07:57:49.608371       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:57:49.608433       1 main.go:301] handling current node
	I1123 07:57:59.610225       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:57:59.610253       1 main.go:301] handling current node
	I1123 07:58:09.609263       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:58:09.609299       1 main.go:301] handling current node
	I1123 07:58:19.609176       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:58:19.609205       1 main.go:301] handling current node
	I1123 07:58:29.610213       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:58:29.610276       1 main.go:301] handling current node
	I1123 07:58:39.608272       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:58:39.608310       1 main.go:301] handling current node
	I1123 07:58:49.609408       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:58:49.609444       1 main.go:301] handling current node
	I1123 07:58:59.609340       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:58:59.609398       1 main.go:301] handling current node
	I1123 07:59:09.616257       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:59:09.616293       1 main.go:301] handling current node
	I1123 07:59:19.608400       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:59:19.608440       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1e9a39b963c81a6ff6ba191d66d478a513599130671d0996e8d442248af5eee3] <==
	W1123 07:57:05.719528       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 07:57:05.736332       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1123 07:57:08.566965       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.111.168.210"}
	W1123 07:57:27.718985       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 07:57:27.734128       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 07:57:27.760968       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1123 07:57:27.779089       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 07:57:39.731435       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.168.210:443: connect: connection refused
	E1123 07:57:39.731534       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.168.210:443: connect: connection refused" logger="UnhandledError"
	W1123 07:57:39.731995       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.168.210:443: connect: connection refused
	E1123 07:57:39.732078       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.168.210:443: connect: connection refused" logger="UnhandledError"
	W1123 07:57:39.825320       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.168.210:443: connect: connection refused
	E1123 07:57:39.825363       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.168.210:443: connect: connection refused" logger="UnhandledError"
	W1123 07:57:53.720693       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 07:57:53.721085       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1123 07:57:53.720995       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.124.72:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.124.72:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.124.72:443: connect: connection refused" logger="UnhandledError"
	E1123 07:57:53.723596       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.124.72:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.124.72:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.124.72:443: connect: connection refused" logger="UnhandledError"
	E1123 07:57:53.729421       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.124.72:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.124.72:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.124.72:443: connect: connection refused" logger="UnhandledError"
	I1123 07:57:53.908608       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1123 07:58:59.769556       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42130: use of closed network connection
	E1123 07:58:59.990994       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42158: use of closed network connection
	E1123 07:59:00.326038       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42166: use of closed network connection
	
	
	==> kube-controller-manager [4952e333e5cbca2ab975c1b717b23754934a25101ec680e6df940a3abe4aa3e3] <==
	I1123 07:56:57.744631       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 07:56:57.745708       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 07:56:57.745724       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 07:56:57.746910       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 07:56:57.747084       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 07:56:57.747214       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 07:56:57.748324       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 07:56:57.749610       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 07:56:57.750465       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 07:56:57.752932       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 07:56:57.752951       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 07:56:57.753040       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 07:56:57.753085       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 07:56:57.753113       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 07:56:57.753142       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 07:56:57.762219       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-782760" podCIDRs=["10.244.0.0/24"]
	E1123 07:57:04.231863       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1123 07:57:27.712138       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1123 07:57:27.712300       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1123 07:57:27.712365       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1123 07:57:27.742053       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1123 07:57:27.750509       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1123 07:57:27.812754       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 07:57:27.851533       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 07:57:42.709683       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [995c0ad221a0ea807ac716f43224f6603841c0abb322b78cd157d03df1535c45] <==
	I1123 07:56:59.924406       1 server_linux.go:53] "Using iptables proxy"
	I1123 07:57:00.037896       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 07:57:00.142186       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 07:57:00.142250       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 07:57:00.142350       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 07:57:00.358112       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 07:57:00.358190       1 server_linux.go:132] "Using iptables Proxier"
	I1123 07:57:00.371626       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 07:57:00.372025       1 server.go:527] "Version info" version="v1.34.1"
	I1123 07:57:00.372045       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 07:57:00.389924       1 config.go:106] "Starting endpoint slice config controller"
	I1123 07:57:00.389957       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 07:57:00.390338       1 config.go:200] "Starting service config controller"
	I1123 07:57:00.390346       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 07:57:00.390683       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 07:57:00.390691       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 07:57:00.394041       1 config.go:309] "Starting node config controller"
	I1123 07:57:00.394139       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 07:57:00.394163       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 07:57:00.490450       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 07:57:00.490526       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 07:57:00.491432       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7b54407c8a503487b0c75dba534bb8d12c3f658348cad08eeee8783e2002685a] <==
	I1123 07:56:51.215967       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 07:56:51.216037       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 07:56:51.216373       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 07:56:51.216428       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1123 07:56:51.224299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 07:56:51.226728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 07:56:51.226911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 07:56:51.227012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 07:56:51.227228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 07:56:51.227357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 07:56:51.227460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 07:56:51.229708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 07:56:51.229832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 07:56:51.229917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 07:56:51.230149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 07:56:51.230259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 07:56:51.230357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 07:56:51.230452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 07:56:51.230539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 07:56:51.230645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 07:56:51.230796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 07:56:51.230890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 07:56:51.231034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 07:56:52.210210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1123 07:56:54.916158       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 07:59:19 addons-782760 kubelet[1275]: I1123 07:59:19.306528    1275 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44bf05ce-a8c1-4035-bb77-19931d7edbb5-kube-api-access-8dggk" (OuterVolumeSpecName: "kube-api-access-8dggk") pod "44bf05ce-a8c1-4035-bb77-19931d7edbb5" (UID: "44bf05ce-a8c1-4035-bb77-19931d7edbb5"). InnerVolumeSpecName "kube-api-access-8dggk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 23 07:59:19 addons-782760 kubelet[1275]: I1123 07:59:19.400966    1275 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8dggk\" (UniqueName: \"kubernetes.io/projected/44bf05ce-a8c1-4035-bb77-19931d7edbb5-kube-api-access-8dggk\") on node \"addons-782760\" DevicePath \"\""
	Nov 23 07:59:19 addons-782760 kubelet[1275]: I1123 07:59:19.401004    1275 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/44bf05ce-a8c1-4035-bb77-19931d7edbb5-gcp-creds\") on node \"addons-782760\" DevicePath \"\""
	Nov 23 07:59:20 addons-782760 kubelet[1275]: I1123 07:59:20.213983    1275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4ec248f274daa476bb63422c3b43a6756f5058f3b044cb5e71b669bba148394"
	Nov 23 07:59:20 addons-782760 kubelet[1275]: E1123 07:59:20.217119    1275 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-782760\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-782760' and this object" podUID="44bf05ce-a8c1-4035-bb77-19931d7edbb5" pod="default/test-local-path"
	Nov 23 07:59:20 addons-782760 kubelet[1275]: E1123 07:59:20.248971    1275 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-782760\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-782760' and this object" podUID="44bf05ce-a8c1-4035-bb77-19931d7edbb5" pod="default/test-local-path"
	Nov 23 07:59:20 addons-782760 kubelet[1275]: I1123 07:59:20.309473    1275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/9316bc5a-9f08-4878-b7ae-1dcd103b19d4-data\") pod \"helper-pod-delete-pvc-4edddf59-348f-4660-91bb-3a71fe1ac723\" (UID: \"9316bc5a-9f08-4878-b7ae-1dcd103b19d4\") " pod="local-path-storage/helper-pod-delete-pvc-4edddf59-348f-4660-91bb-3a71fe1ac723"
	Nov 23 07:59:20 addons-782760 kubelet[1275]: I1123 07:59:20.310112    1275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/9316bc5a-9f08-4878-b7ae-1dcd103b19d4-script\") pod \"helper-pod-delete-pvc-4edddf59-348f-4660-91bb-3a71fe1ac723\" (UID: \"9316bc5a-9f08-4878-b7ae-1dcd103b19d4\") " pod="local-path-storage/helper-pod-delete-pvc-4edddf59-348f-4660-91bb-3a71fe1ac723"
	Nov 23 07:59:20 addons-782760 kubelet[1275]: I1123 07:59:20.310241    1275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgvfj\" (UniqueName: \"kubernetes.io/projected/9316bc5a-9f08-4878-b7ae-1dcd103b19d4-kube-api-access-wgvfj\") pod \"helper-pod-delete-pvc-4edddf59-348f-4660-91bb-3a71fe1ac723\" (UID: \"9316bc5a-9f08-4878-b7ae-1dcd103b19d4\") " pod="local-path-storage/helper-pod-delete-pvc-4edddf59-348f-4660-91bb-3a71fe1ac723"
	Nov 23 07:59:20 addons-782760 kubelet[1275]: I1123 07:59:20.310407    1275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9316bc5a-9f08-4878-b7ae-1dcd103b19d4-gcp-creds\") pod \"helper-pod-delete-pvc-4edddf59-348f-4660-91bb-3a71fe1ac723\" (UID: \"9316bc5a-9f08-4878-b7ae-1dcd103b19d4\") " pod="local-path-storage/helper-pod-delete-pvc-4edddf59-348f-4660-91bb-3a71fe1ac723"
	Nov 23 07:59:20 addons-782760 kubelet[1275]: W1123 07:59:20.580816    1275 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3e0fb2f2cb2c2ca7bc7b036b5b90817ca7c6955044febd5450a96db807d17185/crio-7bd4f678df5d9cf076b645f18b2926161fa1f22061741b1e0d2d4b3062487b7d WatchSource:0}: Error finding container 7bd4f678df5d9cf076b645f18b2926161fa1f22061741b1e0d2d4b3062487b7d: Status 404 returned error can't find the container with id 7bd4f678df5d9cf076b645f18b2926161fa1f22061741b1e0d2d4b3062487b7d
	Nov 23 07:59:21 addons-782760 kubelet[1275]: E1123 07:59:21.246809    1275 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-782760\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-782760' and this object" podUID="44bf05ce-a8c1-4035-bb77-19931d7edbb5" pod="default/test-local-path"
	Nov 23 07:59:21 addons-782760 kubelet[1275]: I1123 07:59:21.300179    1275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44bf05ce-a8c1-4035-bb77-19931d7edbb5" path="/var/lib/kubelet/pods/44bf05ce-a8c1-4035-bb77-19931d7edbb5/volumes"
	Nov 23 07:59:22 addons-782760 kubelet[1275]: I1123 07:59:22.330718    1275 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/9316bc5a-9f08-4878-b7ae-1dcd103b19d4-script\") pod \"9316bc5a-9f08-4878-b7ae-1dcd103b19d4\" (UID: \"9316bc5a-9f08-4878-b7ae-1dcd103b19d4\") "
	Nov 23 07:59:22 addons-782760 kubelet[1275]: I1123 07:59:22.330775    1275 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/9316bc5a-9f08-4878-b7ae-1dcd103b19d4-data\") pod \"9316bc5a-9f08-4878-b7ae-1dcd103b19d4\" (UID: \"9316bc5a-9f08-4878-b7ae-1dcd103b19d4\") "
	Nov 23 07:59:22 addons-782760 kubelet[1275]: I1123 07:59:22.330808    1275 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9316bc5a-9f08-4878-b7ae-1dcd103b19d4-gcp-creds\") pod \"9316bc5a-9f08-4878-b7ae-1dcd103b19d4\" (UID: \"9316bc5a-9f08-4878-b7ae-1dcd103b19d4\") "
	Nov 23 07:59:22 addons-782760 kubelet[1275]: I1123 07:59:22.330837    1275 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgvfj\" (UniqueName: \"kubernetes.io/projected/9316bc5a-9f08-4878-b7ae-1dcd103b19d4-kube-api-access-wgvfj\") pod \"9316bc5a-9f08-4878-b7ae-1dcd103b19d4\" (UID: \"9316bc5a-9f08-4878-b7ae-1dcd103b19d4\") "
	Nov 23 07:59:22 addons-782760 kubelet[1275]: I1123 07:59:22.331494    1275 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9316bc5a-9f08-4878-b7ae-1dcd103b19d4-script" (OuterVolumeSpecName: "script") pod "9316bc5a-9f08-4878-b7ae-1dcd103b19d4" (UID: "9316bc5a-9f08-4878-b7ae-1dcd103b19d4"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Nov 23 07:59:22 addons-782760 kubelet[1275]: I1123 07:59:22.331539    1275 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9316bc5a-9f08-4878-b7ae-1dcd103b19d4-data" (OuterVolumeSpecName: "data") pod "9316bc5a-9f08-4878-b7ae-1dcd103b19d4" (UID: "9316bc5a-9f08-4878-b7ae-1dcd103b19d4"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 23 07:59:22 addons-782760 kubelet[1275]: I1123 07:59:22.331575    1275 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9316bc5a-9f08-4878-b7ae-1dcd103b19d4-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "9316bc5a-9f08-4878-b7ae-1dcd103b19d4" (UID: "9316bc5a-9f08-4878-b7ae-1dcd103b19d4"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 23 07:59:22 addons-782760 kubelet[1275]: I1123 07:59:22.338594    1275 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9316bc5a-9f08-4878-b7ae-1dcd103b19d4-kube-api-access-wgvfj" (OuterVolumeSpecName: "kube-api-access-wgvfj") pod "9316bc5a-9f08-4878-b7ae-1dcd103b19d4" (UID: "9316bc5a-9f08-4878-b7ae-1dcd103b19d4"). InnerVolumeSpecName "kube-api-access-wgvfj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 23 07:59:22 addons-782760 kubelet[1275]: I1123 07:59:22.432311    1275 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/9316bc5a-9f08-4878-b7ae-1dcd103b19d4-script\") on node \"addons-782760\" DevicePath \"\""
	Nov 23 07:59:22 addons-782760 kubelet[1275]: I1123 07:59:22.432351    1275 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/9316bc5a-9f08-4878-b7ae-1dcd103b19d4-data\") on node \"addons-782760\" DevicePath \"\""
	Nov 23 07:59:22 addons-782760 kubelet[1275]: I1123 07:59:22.432363    1275 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9316bc5a-9f08-4878-b7ae-1dcd103b19d4-gcp-creds\") on node \"addons-782760\" DevicePath \"\""
	Nov 23 07:59:22 addons-782760 kubelet[1275]: I1123 07:59:22.432374    1275 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wgvfj\" (UniqueName: \"kubernetes.io/projected/9316bc5a-9f08-4878-b7ae-1dcd103b19d4-kube-api-access-wgvfj\") on node \"addons-782760\" DevicePath \"\""
	
	
	==> storage-provisioner [685798fa38932c34ea5b41c1b40649d3026a53a13752ea5bc0703dc6086e5d47] <==
	W1123 07:58:57.287302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:58:59.292672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:58:59.297363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:01.301116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:01.305866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:03.308844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:03.313257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:05.316820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:05.320618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:07.324098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:07.329089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:09.332681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:09.336519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:11.339721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:11.344342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:13.346963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:13.353716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:15.357218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:15.365027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:17.368100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:17.375698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:19.378305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:19.382592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:21.386567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:59:21.394400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-782760 -n addons-782760
helpers_test.go:269: (dbg) Run:  kubectl --context addons-782760 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-c8dgn ingress-nginx-admission-patch-g4ft4 registry-creds-764b6fb674-5m8ft
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-782760 describe pod ingress-nginx-admission-create-c8dgn ingress-nginx-admission-patch-g4ft4 registry-creds-764b6fb674-5m8ft
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-782760 describe pod ingress-nginx-admission-create-c8dgn ingress-nginx-admission-patch-g4ft4 registry-creds-764b6fb674-5m8ft: exit status 1 (99.891257ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-c8dgn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-g4ft4" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-5m8ft" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-782760 describe pod ingress-nginx-admission-create-c8dgn ingress-nginx-admission-patch-g4ft4 registry-creds-764b6fb674-5m8ft: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-782760 addons disable headlamp --alsologtostderr -v=1: exit status 11 (272.503873ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 07:59:23.714451 1051188 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:59:23.715802 1051188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:59:23.715819 1051188 out.go:374] Setting ErrFile to fd 2...
	I1123 07:59:23.715826 1051188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:59:23.716243 1051188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 07:59:23.716548 1051188 mustload.go:66] Loading cluster: addons-782760
	I1123 07:59:23.716916 1051188 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:59:23.716927 1051188 addons.go:622] checking whether the cluster is paused
	I1123 07:59:23.717030 1051188 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:59:23.717040 1051188 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:59:23.717565 1051188 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:59:23.734209 1051188 ssh_runner.go:195] Run: systemctl --version
	I1123 07:59:23.734259 1051188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:59:23.753770 1051188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:59:23.863464 1051188 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:59:23.863594 1051188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:59:23.901696 1051188 cri.go:89] found id: "b7dbc42af3eaa55b87cc8920859061e757eb023e86e81249f165e03ab50e4242"
	I1123 07:59:23.901730 1051188 cri.go:89] found id: "e81b53e67dd69b5c11fd7296687e0873840c35bd3d9a0a362120bddf439d6c1b"
	I1123 07:59:23.901735 1051188 cri.go:89] found id: "25c0aa23665db233b369dab0d5441e57c0ce88fa6616d8cf7e6b835782338180"
	I1123 07:59:23.901739 1051188 cri.go:89] found id: "654a0f71268c2242c663c96bcf3824362a6b59fde36427f2178d5a6a7a40d822"
	I1123 07:59:23.901766 1051188 cri.go:89] found id: "05fe963f89f66768688e74774e00621a5f6cfcdb1fb13cf5f9f72be082d11a49"
	I1123 07:59:23.901777 1051188 cri.go:89] found id: "ff09ce175fe75259d6414ddd02e5948745625c2bbb202a6de931ef6f7a3dd631"
	I1123 07:59:23.901780 1051188 cri.go:89] found id: "7ca479867b2432892b7d17c86aa12ad6fee7b14dfa3af5e913666586727c22e5"
	I1123 07:59:23.901784 1051188 cri.go:89] found id: "35dd0f9bcb50a0d13664543c1e5ff8dac184175da2e417035c9bf88b4c70055c"
	I1123 07:59:23.901787 1051188 cri.go:89] found id: "90e12086b17a955a96fa28343672584a5d4f7e85965306622f66ff5c2f64668b"
	I1123 07:59:23.901794 1051188 cri.go:89] found id: "410c2359fb0c01d8f73a1fd70b1094ae44de6046b129327df1bd83c0d6337ebb"
	I1123 07:59:23.901802 1051188 cri.go:89] found id: "9311aa036bd97e236f7744a9e5ffd3e67d26ec0f771860cd871daaf5ef151735"
	I1123 07:59:23.901807 1051188 cri.go:89] found id: "1d4e31902581e865cf2387b39a5a9142c169c6e1eadf244cde62a11fb2d3bc71"
	I1123 07:59:23.901811 1051188 cri.go:89] found id: "9734ce796f3ef40aea74fe5b37f2070ba72c41a196839cde80dd0861b1465993"
	I1123 07:59:23.901815 1051188 cri.go:89] found id: "fb98b04224a9c4438cfa50aabef9ca321dde423db6b9e11c6ac1ef33927bce15"
	I1123 07:59:23.901824 1051188 cri.go:89] found id: "d2ffd09041ccf70f835af84256922f049edff6ce0aa5b926e7859efc43046a15"
	I1123 07:59:23.901844 1051188 cri.go:89] found id: "685798fa38932c34ea5b41c1b40649d3026a53a13752ea5bc0703dc6086e5d47"
	I1123 07:59:23.901855 1051188 cri.go:89] found id: "01a96c05c2e23fce327adec63f507ecc75154c56dc51b79294c0ada40f73d486"
	I1123 07:59:23.901860 1051188 cri.go:89] found id: "995c0ad221a0ea807ac716f43224f6603841c0abb322b78cd157d03df1535c45"
	I1123 07:59:23.901863 1051188 cri.go:89] found id: "d3d5fbc406391cea6bd05d6bf3e77708af72d668d9cf1f8f67553646b8ebd263"
	I1123 07:59:23.901876 1051188 cri.go:89] found id: "03fd92afca30f9b387a50e40f209a51d44d2219bf6337bbe9b4396831fce9ad8"
	I1123 07:59:23.901883 1051188 cri.go:89] found id: "7b54407c8a503487b0c75dba534bb8d12c3f658348cad08eeee8783e2002685a"
	I1123 07:59:23.901893 1051188 cri.go:89] found id: "4952e333e5cbca2ab975c1b717b23754934a25101ec680e6df940a3abe4aa3e3"
	I1123 07:59:23.901896 1051188 cri.go:89] found id: "1e9a39b963c81a6ff6ba191d66d478a513599130671d0996e8d442248af5eee3"
	I1123 07:59:23.901899 1051188 cri.go:89] found id: ""
	I1123 07:59:23.901961 1051188 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:59:23.917470 1051188 out.go:203] 
	W1123 07:59:23.920377 1051188 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:59:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:59:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:59:23.920405 1051188 out.go:285] * 
	* 
	W1123 07:59:23.928515 1051188 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:59:23.931499 1051188 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-782760 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.41s)
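Note: the Headlamp failure above (and the CloudSpanner and LocalPath failures below) all exit with MK_ADDON_DISABLE_PAUSED for the same reason visible in the stderr traces: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running "sudo runc list -f json" on the node, and that second step fails with "open /run/runc: no such file or directory" on this crio profile. A minimal manual reproduction is sketched below; it only reuses commands that appear in the log, and the final ls line is an added diagnostic assumption (checking which runtime state directories actually exist on the node), not something the test performs.

	# Re-run the paused-state probe by hand against this profile (addons-782760).
	out/minikube-linux-arm64 -p addons-782760 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# The step that fails in the report: runc has no state directory on this node.
	out/minikube-linux-arm64 -p addons-782760 ssh -- sudo runc list -f json
	# Added check (assumption): see which runtime state directories are present under /run.
	out/minikube-linux-arm64 -p addons-782760 ssh -- ls /run

If /run/runc is indeed absent while the crictl listing succeeds, the disable failures reflect the paused-state check itself rather than the individual addons.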

                                                
                                    
TestAddons/parallel/CloudSpanner (5.33s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-wn4d8" [aa7b7f5f-3387-4531-8b33-2460826f73a3] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005417813s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-782760 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (324.011849ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 07:59:20.876166 1050690 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:59:20.876926 1050690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:59:20.876935 1050690 out.go:374] Setting ErrFile to fd 2...
	I1123 07:59:20.876941 1050690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:59:20.877213 1050690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 07:59:20.877582 1050690 mustload.go:66] Loading cluster: addons-782760
	I1123 07:59:20.877942 1050690 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:59:20.877952 1050690 addons.go:622] checking whether the cluster is paused
	I1123 07:59:20.878051 1050690 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:59:20.878061 1050690 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:59:20.878556 1050690 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:59:20.896508 1050690 ssh_runner.go:195] Run: systemctl --version
	I1123 07:59:20.896566 1050690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:59:20.923300 1050690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:59:21.030047 1050690 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:59:21.030141 1050690 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:59:21.068186 1050690 cri.go:89] found id: "b7dbc42af3eaa55b87cc8920859061e757eb023e86e81249f165e03ab50e4242"
	I1123 07:59:21.068216 1050690 cri.go:89] found id: "e81b53e67dd69b5c11fd7296687e0873840c35bd3d9a0a362120bddf439d6c1b"
	I1123 07:59:21.068221 1050690 cri.go:89] found id: "25c0aa23665db233b369dab0d5441e57c0ce88fa6616d8cf7e6b835782338180"
	I1123 07:59:21.068225 1050690 cri.go:89] found id: "654a0f71268c2242c663c96bcf3824362a6b59fde36427f2178d5a6a7a40d822"
	I1123 07:59:21.068228 1050690 cri.go:89] found id: "05fe963f89f66768688e74774e00621a5f6cfcdb1fb13cf5f9f72be082d11a49"
	I1123 07:59:21.068232 1050690 cri.go:89] found id: "ff09ce175fe75259d6414ddd02e5948745625c2bbb202a6de931ef6f7a3dd631"
	I1123 07:59:21.068235 1050690 cri.go:89] found id: "7ca479867b2432892b7d17c86aa12ad6fee7b14dfa3af5e913666586727c22e5"
	I1123 07:59:21.068238 1050690 cri.go:89] found id: "35dd0f9bcb50a0d13664543c1e5ff8dac184175da2e417035c9bf88b4c70055c"
	I1123 07:59:21.068241 1050690 cri.go:89] found id: "90e12086b17a955a96fa28343672584a5d4f7e85965306622f66ff5c2f64668b"
	I1123 07:59:21.068246 1050690 cri.go:89] found id: "410c2359fb0c01d8f73a1fd70b1094ae44de6046b129327df1bd83c0d6337ebb"
	I1123 07:59:21.068250 1050690 cri.go:89] found id: "9311aa036bd97e236f7744a9e5ffd3e67d26ec0f771860cd871daaf5ef151735"
	I1123 07:59:21.068253 1050690 cri.go:89] found id: "1d4e31902581e865cf2387b39a5a9142c169c6e1eadf244cde62a11fb2d3bc71"
	I1123 07:59:21.068256 1050690 cri.go:89] found id: "9734ce796f3ef40aea74fe5b37f2070ba72c41a196839cde80dd0861b1465993"
	I1123 07:59:21.068259 1050690 cri.go:89] found id: "fb98b04224a9c4438cfa50aabef9ca321dde423db6b9e11c6ac1ef33927bce15"
	I1123 07:59:21.068262 1050690 cri.go:89] found id: "d2ffd09041ccf70f835af84256922f049edff6ce0aa5b926e7859efc43046a15"
	I1123 07:59:21.068267 1050690 cri.go:89] found id: "685798fa38932c34ea5b41c1b40649d3026a53a13752ea5bc0703dc6086e5d47"
	I1123 07:59:21.068270 1050690 cri.go:89] found id: "01a96c05c2e23fce327adec63f507ecc75154c56dc51b79294c0ada40f73d486"
	I1123 07:59:21.068274 1050690 cri.go:89] found id: "995c0ad221a0ea807ac716f43224f6603841c0abb322b78cd157d03df1535c45"
	I1123 07:59:21.068277 1050690 cri.go:89] found id: "d3d5fbc406391cea6bd05d6bf3e77708af72d668d9cf1f8f67553646b8ebd263"
	I1123 07:59:21.068280 1050690 cri.go:89] found id: "03fd92afca30f9b387a50e40f209a51d44d2219bf6337bbe9b4396831fce9ad8"
	I1123 07:59:21.068286 1050690 cri.go:89] found id: "7b54407c8a503487b0c75dba534bb8d12c3f658348cad08eeee8783e2002685a"
	I1123 07:59:21.068289 1050690 cri.go:89] found id: "4952e333e5cbca2ab975c1b717b23754934a25101ec680e6df940a3abe4aa3e3"
	I1123 07:59:21.068292 1050690 cri.go:89] found id: "1e9a39b963c81a6ff6ba191d66d478a513599130671d0996e8d442248af5eee3"
	I1123 07:59:21.068295 1050690 cri.go:89] found id: ""
	I1123 07:59:21.068349 1050690 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:59:21.085925 1050690 out.go:203] 
	W1123 07:59:21.088863 1050690 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:59:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:59:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:59:21.088885 1050690 out.go:285] * 
	* 
	W1123 07:59:21.096900 1050690 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:59:21.099967 1050690 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-782760 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.33s)

                                                
                                    
TestAddons/parallel/LocalPath (8.36s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-782760 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-782760 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-782760 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [44bf05ce-a8c1-4035-bb77-19931d7edbb5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [44bf05ce-a8c1-4035-bb77-19931d7edbb5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [44bf05ce-a8c1-4035-bb77-19931d7edbb5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003630499s
addons_test.go:967: (dbg) Run:  kubectl --context addons-782760 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 ssh "cat /opt/local-path-provisioner/pvc-4edddf59-348f-4660-91bb-3a71fe1ac723_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-782760 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-782760 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-782760 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (263.169568ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 07:59:20.312001 1050576 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:59:20.313517 1050576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:59:20.313534 1050576 out.go:374] Setting ErrFile to fd 2...
	I1123 07:59:20.313540 1050576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:59:20.313827 1050576 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 07:59:20.314150 1050576 mustload.go:66] Loading cluster: addons-782760
	I1123 07:59:20.314587 1050576 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:59:20.314604 1050576 addons.go:622] checking whether the cluster is paused
	I1123 07:59:20.314765 1050576 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:59:20.314791 1050576 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:59:20.315333 1050576 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:59:20.332805 1050576 ssh_runner.go:195] Run: systemctl --version
	I1123 07:59:20.332875 1050576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:59:20.350753 1050576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:59:20.453685 1050576 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:59:20.453793 1050576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:59:20.487235 1050576 cri.go:89] found id: "b7dbc42af3eaa55b87cc8920859061e757eb023e86e81249f165e03ab50e4242"
	I1123 07:59:20.487255 1050576 cri.go:89] found id: "e81b53e67dd69b5c11fd7296687e0873840c35bd3d9a0a362120bddf439d6c1b"
	I1123 07:59:20.487265 1050576 cri.go:89] found id: "25c0aa23665db233b369dab0d5441e57c0ce88fa6616d8cf7e6b835782338180"
	I1123 07:59:20.487270 1050576 cri.go:89] found id: "654a0f71268c2242c663c96bcf3824362a6b59fde36427f2178d5a6a7a40d822"
	I1123 07:59:20.487273 1050576 cri.go:89] found id: "05fe963f89f66768688e74774e00621a5f6cfcdb1fb13cf5f9f72be082d11a49"
	I1123 07:59:20.487277 1050576 cri.go:89] found id: "ff09ce175fe75259d6414ddd02e5948745625c2bbb202a6de931ef6f7a3dd631"
	I1123 07:59:20.487280 1050576 cri.go:89] found id: "7ca479867b2432892b7d17c86aa12ad6fee7b14dfa3af5e913666586727c22e5"
	I1123 07:59:20.487289 1050576 cri.go:89] found id: "35dd0f9bcb50a0d13664543c1e5ff8dac184175da2e417035c9bf88b4c70055c"
	I1123 07:59:20.487292 1050576 cri.go:89] found id: "90e12086b17a955a96fa28343672584a5d4f7e85965306622f66ff5c2f64668b"
	I1123 07:59:20.487299 1050576 cri.go:89] found id: "410c2359fb0c01d8f73a1fd70b1094ae44de6046b129327df1bd83c0d6337ebb"
	I1123 07:59:20.487307 1050576 cri.go:89] found id: "9311aa036bd97e236f7744a9e5ffd3e67d26ec0f771860cd871daaf5ef151735"
	I1123 07:59:20.487310 1050576 cri.go:89] found id: "1d4e31902581e865cf2387b39a5a9142c169c6e1eadf244cde62a11fb2d3bc71"
	I1123 07:59:20.487314 1050576 cri.go:89] found id: "9734ce796f3ef40aea74fe5b37f2070ba72c41a196839cde80dd0861b1465993"
	I1123 07:59:20.487322 1050576 cri.go:89] found id: "fb98b04224a9c4438cfa50aabef9ca321dde423db6b9e11c6ac1ef33927bce15"
	I1123 07:59:20.487326 1050576 cri.go:89] found id: "d2ffd09041ccf70f835af84256922f049edff6ce0aa5b926e7859efc43046a15"
	I1123 07:59:20.487331 1050576 cri.go:89] found id: "685798fa38932c34ea5b41c1b40649d3026a53a13752ea5bc0703dc6086e5d47"
	I1123 07:59:20.487337 1050576 cri.go:89] found id: "01a96c05c2e23fce327adec63f507ecc75154c56dc51b79294c0ada40f73d486"
	I1123 07:59:20.487340 1050576 cri.go:89] found id: "995c0ad221a0ea807ac716f43224f6603841c0abb322b78cd157d03df1535c45"
	I1123 07:59:20.487344 1050576 cri.go:89] found id: "d3d5fbc406391cea6bd05d6bf3e77708af72d668d9cf1f8f67553646b8ebd263"
	I1123 07:59:20.487347 1050576 cri.go:89] found id: "03fd92afca30f9b387a50e40f209a51d44d2219bf6337bbe9b4396831fce9ad8"
	I1123 07:59:20.487351 1050576 cri.go:89] found id: "7b54407c8a503487b0c75dba534bb8d12c3f658348cad08eeee8783e2002685a"
	I1123 07:59:20.487364 1050576 cri.go:89] found id: "4952e333e5cbca2ab975c1b717b23754934a25101ec680e6df940a3abe4aa3e3"
	I1123 07:59:20.487367 1050576 cri.go:89] found id: "1e9a39b963c81a6ff6ba191d66d478a513599130671d0996e8d442248af5eee3"
	I1123 07:59:20.487369 1050576 cri.go:89] found id: ""
	I1123 07:59:20.487421 1050576 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:59:20.502675 1050576 out.go:203] 
	W1123 07:59:20.506087 1050576 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:59:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:59:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:59:20.506119 1050576 out.go:285] * 
	* 
	W1123 07:59:20.514234 1050576 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:59:20.517578 1050576 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-782760 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.36s)
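The addon-disable failure above comes from minikube's paused-cluster check: after listing kube-system containers with crictl, it runs "sudo runc list -f json", which exits with status 1 because /run/runc does not exist on this CRI-O node, and that error is surfaced as MK_ADDON_DISABLE_PAUSED. The same check fails in the NvidiaDevicePlugin and Yakd runs below. A minimal sketch of reproducing the check by hand, reusing the commands from the log above (slightly simplified; the comments describe what this run would be expected to print, not captured output):

	# list kube-system containers the same way the paused check does
	out/minikube-linux-arm64 -p addons-782760 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# the follow-up runc listing is the step that actually fails on this node
	out/minikube-linux-arm64 -p addons-782760 ssh "sudo runc list -f json"
	# expected failure on this run: open /run/runc: no such file or directory (exit status 1)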

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-stqrq" [68a915e8-7aa3-479a-a75c-9cb582f7b791] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00311311s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-782760 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (262.259505ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 07:59:11.946735 1050144 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:59:11.947633 1050144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:59:11.947916 1050144 out.go:374] Setting ErrFile to fd 2...
	I1123 07:59:11.947952 1050144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:59:11.948264 1050144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 07:59:11.948610 1050144 mustload.go:66] Loading cluster: addons-782760
	I1123 07:59:11.949055 1050144 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:59:11.949094 1050144 addons.go:622] checking whether the cluster is paused
	I1123 07:59:11.949241 1050144 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:59:11.949273 1050144 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:59:11.949806 1050144 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:59:11.976412 1050144 ssh_runner.go:195] Run: systemctl --version
	I1123 07:59:11.976473 1050144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:59:11.993926 1050144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:59:12.097784 1050144 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:59:12.097889 1050144 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:59:12.129712 1050144 cri.go:89] found id: "b7dbc42af3eaa55b87cc8920859061e757eb023e86e81249f165e03ab50e4242"
	I1123 07:59:12.129737 1050144 cri.go:89] found id: "e81b53e67dd69b5c11fd7296687e0873840c35bd3d9a0a362120bddf439d6c1b"
	I1123 07:59:12.129742 1050144 cri.go:89] found id: "25c0aa23665db233b369dab0d5441e57c0ce88fa6616d8cf7e6b835782338180"
	I1123 07:59:12.129746 1050144 cri.go:89] found id: "654a0f71268c2242c663c96bcf3824362a6b59fde36427f2178d5a6a7a40d822"
	I1123 07:59:12.129749 1050144 cri.go:89] found id: "05fe963f89f66768688e74774e00621a5f6cfcdb1fb13cf5f9f72be082d11a49"
	I1123 07:59:12.129753 1050144 cri.go:89] found id: "ff09ce175fe75259d6414ddd02e5948745625c2bbb202a6de931ef6f7a3dd631"
	I1123 07:59:12.129757 1050144 cri.go:89] found id: "7ca479867b2432892b7d17c86aa12ad6fee7b14dfa3af5e913666586727c22e5"
	I1123 07:59:12.129760 1050144 cri.go:89] found id: "35dd0f9bcb50a0d13664543c1e5ff8dac184175da2e417035c9bf88b4c70055c"
	I1123 07:59:12.129770 1050144 cri.go:89] found id: "90e12086b17a955a96fa28343672584a5d4f7e85965306622f66ff5c2f64668b"
	I1123 07:59:12.129777 1050144 cri.go:89] found id: "410c2359fb0c01d8f73a1fd70b1094ae44de6046b129327df1bd83c0d6337ebb"
	I1123 07:59:12.129781 1050144 cri.go:89] found id: "9311aa036bd97e236f7744a9e5ffd3e67d26ec0f771860cd871daaf5ef151735"
	I1123 07:59:12.129784 1050144 cri.go:89] found id: "1d4e31902581e865cf2387b39a5a9142c169c6e1eadf244cde62a11fb2d3bc71"
	I1123 07:59:12.129791 1050144 cri.go:89] found id: "9734ce796f3ef40aea74fe5b37f2070ba72c41a196839cde80dd0861b1465993"
	I1123 07:59:12.129794 1050144 cri.go:89] found id: "fb98b04224a9c4438cfa50aabef9ca321dde423db6b9e11c6ac1ef33927bce15"
	I1123 07:59:12.129797 1050144 cri.go:89] found id: "d2ffd09041ccf70f835af84256922f049edff6ce0aa5b926e7859efc43046a15"
	I1123 07:59:12.129803 1050144 cri.go:89] found id: "685798fa38932c34ea5b41c1b40649d3026a53a13752ea5bc0703dc6086e5d47"
	I1123 07:59:12.129812 1050144 cri.go:89] found id: "01a96c05c2e23fce327adec63f507ecc75154c56dc51b79294c0ada40f73d486"
	I1123 07:59:12.129816 1050144 cri.go:89] found id: "995c0ad221a0ea807ac716f43224f6603841c0abb322b78cd157d03df1535c45"
	I1123 07:59:12.129819 1050144 cri.go:89] found id: "d3d5fbc406391cea6bd05d6bf3e77708af72d668d9cf1f8f67553646b8ebd263"
	I1123 07:59:12.129823 1050144 cri.go:89] found id: "03fd92afca30f9b387a50e40f209a51d44d2219bf6337bbe9b4396831fce9ad8"
	I1123 07:59:12.129827 1050144 cri.go:89] found id: "7b54407c8a503487b0c75dba534bb8d12c3f658348cad08eeee8783e2002685a"
	I1123 07:59:12.129844 1050144 cri.go:89] found id: "4952e333e5cbca2ab975c1b717b23754934a25101ec680e6df940a3abe4aa3e3"
	I1123 07:59:12.129848 1050144 cri.go:89] found id: "1e9a39b963c81a6ff6ba191d66d478a513599130671d0996e8d442248af5eee3"
	I1123 07:59:12.129851 1050144 cri.go:89] found id: ""
	I1123 07:59:12.129934 1050144 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:59:12.143988 1050144 out.go:203] 
	W1123 07:59:12.145534 1050144 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:59:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:59:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:59:12.145552 1050144 out.go:285] * 
	* 
	W1123 07:59:12.153697 1050144 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:59:12.155327 1050144 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-782760 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-6j7jv" [9f83e21d-806e-42c7-9996-9f968a9683ae] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003240759s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-782760 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-782760 addons disable yakd --alsologtostderr -v=1: exit status 11 (266.202276ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 07:59:06.677380 1050054 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:59:06.678707 1050054 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:59:06.678725 1050054 out.go:374] Setting ErrFile to fd 2...
	I1123 07:59:06.678731 1050054 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:59:06.679075 1050054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 07:59:06.679500 1050054 mustload.go:66] Loading cluster: addons-782760
	I1123 07:59:06.679924 1050054 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:59:06.679944 1050054 addons.go:622] checking whether the cluster is paused
	I1123 07:59:06.680057 1050054 config.go:182] Loaded profile config "addons-782760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:59:06.680071 1050054 host.go:66] Checking if "addons-782760" exists ...
	I1123 07:59:06.680549 1050054 cli_runner.go:164] Run: docker container inspect addons-782760 --format={{.State.Status}}
	I1123 07:59:06.698551 1050054 ssh_runner.go:195] Run: systemctl --version
	I1123 07:59:06.698607 1050054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-782760
	I1123 07:59:06.717308 1050054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34227 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/addons-782760/id_rsa Username:docker}
	I1123 07:59:06.821801 1050054 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:59:06.821888 1050054 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:59:06.862544 1050054 cri.go:89] found id: "b7dbc42af3eaa55b87cc8920859061e757eb023e86e81249f165e03ab50e4242"
	I1123 07:59:06.862565 1050054 cri.go:89] found id: "e81b53e67dd69b5c11fd7296687e0873840c35bd3d9a0a362120bddf439d6c1b"
	I1123 07:59:06.862570 1050054 cri.go:89] found id: "25c0aa23665db233b369dab0d5441e57c0ce88fa6616d8cf7e6b835782338180"
	I1123 07:59:06.862574 1050054 cri.go:89] found id: "654a0f71268c2242c663c96bcf3824362a6b59fde36427f2178d5a6a7a40d822"
	I1123 07:59:06.862578 1050054 cri.go:89] found id: "05fe963f89f66768688e74774e00621a5f6cfcdb1fb13cf5f9f72be082d11a49"
	I1123 07:59:06.862581 1050054 cri.go:89] found id: "ff09ce175fe75259d6414ddd02e5948745625c2bbb202a6de931ef6f7a3dd631"
	I1123 07:59:06.862586 1050054 cri.go:89] found id: "7ca479867b2432892b7d17c86aa12ad6fee7b14dfa3af5e913666586727c22e5"
	I1123 07:59:06.862589 1050054 cri.go:89] found id: "35dd0f9bcb50a0d13664543c1e5ff8dac184175da2e417035c9bf88b4c70055c"
	I1123 07:59:06.862592 1050054 cri.go:89] found id: "90e12086b17a955a96fa28343672584a5d4f7e85965306622f66ff5c2f64668b"
	I1123 07:59:06.862598 1050054 cri.go:89] found id: "410c2359fb0c01d8f73a1fd70b1094ae44de6046b129327df1bd83c0d6337ebb"
	I1123 07:59:06.862601 1050054 cri.go:89] found id: "9311aa036bd97e236f7744a9e5ffd3e67d26ec0f771860cd871daaf5ef151735"
	I1123 07:59:06.862604 1050054 cri.go:89] found id: "1d4e31902581e865cf2387b39a5a9142c169c6e1eadf244cde62a11fb2d3bc71"
	I1123 07:59:06.862608 1050054 cri.go:89] found id: "9734ce796f3ef40aea74fe5b37f2070ba72c41a196839cde80dd0861b1465993"
	I1123 07:59:06.862611 1050054 cri.go:89] found id: "fb98b04224a9c4438cfa50aabef9ca321dde423db6b9e11c6ac1ef33927bce15"
	I1123 07:59:06.862614 1050054 cri.go:89] found id: "d2ffd09041ccf70f835af84256922f049edff6ce0aa5b926e7859efc43046a15"
	I1123 07:59:06.862619 1050054 cri.go:89] found id: "685798fa38932c34ea5b41c1b40649d3026a53a13752ea5bc0703dc6086e5d47"
	I1123 07:59:06.862629 1050054 cri.go:89] found id: "01a96c05c2e23fce327adec63f507ecc75154c56dc51b79294c0ada40f73d486"
	I1123 07:59:06.862634 1050054 cri.go:89] found id: "995c0ad221a0ea807ac716f43224f6603841c0abb322b78cd157d03df1535c45"
	I1123 07:59:06.862637 1050054 cri.go:89] found id: "d3d5fbc406391cea6bd05d6bf3e77708af72d668d9cf1f8f67553646b8ebd263"
	I1123 07:59:06.862640 1050054 cri.go:89] found id: "03fd92afca30f9b387a50e40f209a51d44d2219bf6337bbe9b4396831fce9ad8"
	I1123 07:59:06.862645 1050054 cri.go:89] found id: "7b54407c8a503487b0c75dba534bb8d12c3f658348cad08eeee8783e2002685a"
	I1123 07:59:06.862648 1050054 cri.go:89] found id: "4952e333e5cbca2ab975c1b717b23754934a25101ec680e6df940a3abe4aa3e3"
	I1123 07:59:06.862652 1050054 cri.go:89] found id: "1e9a39b963c81a6ff6ba191d66d478a513599130671d0996e8d442248af5eee3"
	I1123 07:59:06.862654 1050054 cri.go:89] found id: ""
	I1123 07:59:06.862708 1050054 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:59:06.876823 1050054 out.go:203] 
	W1123 07:59:06.878491 1050054 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:59:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:59:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:59:06.878514 1050054 out.go:285] * 
	* 
	W1123 07:59:06.886922 1050054 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:59:06.888257 1050054 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-782760 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-333688 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-333688 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-cmzr4" [087dfdf0-a046-4156-94e2-3901f709787e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-333688 -n functional-333688
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-23 08:16:20.257120537 +0000 UTC m=+1233.955881282
functional_test.go:1645: (dbg) Run:  kubectl --context functional-333688 describe po hello-node-connect-7d85dfc575-cmzr4 -n default
functional_test.go:1645: (dbg) kubectl --context functional-333688 describe po hello-node-connect-7d85dfc575-cmzr4 -n default:
Name:             hello-node-connect-7d85dfc575-cmzr4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-333688/192.168.49.2
Start Time:       Sun, 23 Nov 2025 08:06:19 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvfnx (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-vvfnx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cmzr4 to functional-333688
Normal   Pulling    7m15s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m15s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m15s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m58s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m43s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-333688 logs hello-node-connect-7d85dfc575-cmzr4 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-333688 logs hello-node-connect-7d85dfc575-cmzr4 -n default: exit status 1 (105.297153ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-cmzr4" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-333688 logs hello-node-connect-7d85dfc575-cmzr4 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
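The kubelet events above pin the pull failure on CRI-O's short-name handling: short-name mode is enforcing, so the unqualified reference "kicbase/echo-server" resolves to an ambiguous list of candidate registries and every pull attempt ends in ErrImagePull/ImagePullBackOff. A minimal sketch of the usual workaround, assuming the intended registry is docker.io (this command was not run in this job):

	# point the existing deployment at a fully qualified image reference instead of an ambiguous short name
	kubectl --context functional-333688 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:latest
	# alternatively, short-name-mode in the node's /etc/containers/registries.conf can be
	# relaxed from "enforcing" to "permissive" (see containers-registries.conf(5))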
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-333688 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-cmzr4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-333688/192.168.49.2
Start Time:       Sun, 23 Nov 2025 08:06:19 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvfnx (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-vvfnx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cmzr4 to functional-333688
Normal   Pulling    7m15s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m15s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m15s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m58s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m43s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-333688 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-333688 logs -l app=hello-node-connect: exit status 1 (83.901149ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-cmzr4" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-333688 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-333688 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.127.77
IPs:                      10.109.127.77
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30131/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
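Endpoints is empty because the only pod matching the app=hello-node-connect selector never became Ready, so NodePort 30131 has no backend to forward to; the connectivity failure follows from the image pull failure rather than from the service definition itself. A quick check against the same context (not part of the captured run) would be:

	kubectl --context functional-333688 get endpoints hello-node-connect
	# the ENDPOINTS column stays empty until a selected pod reports Ready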
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-333688
helpers_test.go:243: (dbg) docker inspect functional-333688:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "40979da38799e9fe0fb64562d65f4db12f4d10b0555bc344d4c82007c9dedc39",
	        "Created": "2025-11-23T08:03:18.086860546Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1058854,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:03:18.148362377Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/40979da38799e9fe0fb64562d65f4db12f4d10b0555bc344d4c82007c9dedc39/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40979da38799e9fe0fb64562d65f4db12f4d10b0555bc344d4c82007c9dedc39/hostname",
	        "HostsPath": "/var/lib/docker/containers/40979da38799e9fe0fb64562d65f4db12f4d10b0555bc344d4c82007c9dedc39/hosts",
	        "LogPath": "/var/lib/docker/containers/40979da38799e9fe0fb64562d65f4db12f4d10b0555bc344d4c82007c9dedc39/40979da38799e9fe0fb64562d65f4db12f4d10b0555bc344d4c82007c9dedc39-json.log",
	        "Name": "/functional-333688",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-333688:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-333688",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "40979da38799e9fe0fb64562d65f4db12f4d10b0555bc344d4c82007c9dedc39",
	                "LowerDir": "/var/lib/docker/overlay2/e2aa9f42e9a6193603c902bbed364079e33daa5f589070ec495a4d8f6d750ad9-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2aa9f42e9a6193603c902bbed364079e33daa5f589070ec495a4d8f6d750ad9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2aa9f42e9a6193603c902bbed364079e33daa5f589070ec495a4d8f6d750ad9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2aa9f42e9a6193603c902bbed364079e33daa5f589070ec495a4d8f6d750ad9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-333688",
	                "Source": "/var/lib/docker/volumes/functional-333688/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-333688",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-333688",
	                "name.minikube.sigs.k8s.io": "functional-333688",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "59c7fa79dc16071d2264c789ff09a7c6272aeab4ef58c4e203a44a3081cc36be",
	            "SandboxKey": "/var/run/docker/netns/59c7fa79dc16",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34237"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34238"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34241"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34239"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34240"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-333688": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:5b:e8:b6:6e:cd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e054183d8c6d1ab64eb7762e9820b6f575f61bdd1e7db24a1888feaaa0879a26",
	                    "EndpointID": "d76eda3deb3bc3ddf985abc6762f37c19c21b20cdcf69b4ed25a6d857e366f68",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-333688",
	                        "40979da38799"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-333688 -n functional-333688
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-333688 logs -n 25: (1.426444445s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-333688 ssh -n functional-333688 sudo cat /tmp/does/not/exist/cp-test.txt                                               │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │ 23 Nov 25 08:06 UTC │
	│ ssh     │ functional-333688 ssh echo hello                                                                                                  │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │ 23 Nov 25 08:06 UTC │
	│ ssh     │ functional-333688 ssh cat /etc/hostname                                                                                           │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │ 23 Nov 25 08:06 UTC │
	│ mount   │ -p functional-333688 /tmp/TestFunctionalparallelMountCmdany-port1789193375/001:/mount-9p --alsologtostderr -v=1                   │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │                     │
	│ ssh     │ functional-333688 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │                     │
	│ ssh     │ functional-333688 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │ 23 Nov 25 08:06 UTC │
	│ ssh     │ functional-333688 ssh -- ls -la /mount-9p                                                                                         │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │ 23 Nov 25 08:06 UTC │
	│ ssh     │ functional-333688 ssh cat /mount-9p/test-1763885166784446993                                                                      │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │ 23 Nov 25 08:06 UTC │
	│ ssh     │ functional-333688 ssh stat /mount-9p/created-by-test                                                                              │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │ 23 Nov 25 08:06 UTC │
	│ ssh     │ functional-333688 ssh stat /mount-9p/created-by-pod                                                                               │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │ 23 Nov 25 08:06 UTC │
	│ ssh     │ functional-333688 ssh sudo umount -f /mount-9p                                                                                    │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │ 23 Nov 25 08:06 UTC │
	│ ssh     │ functional-333688 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │                     │
	│ mount   │ -p functional-333688 /tmp/TestFunctionalparallelMountCmdspecific-port3244396986/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │                     │
	│ ssh     │ functional-333688 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │ 23 Nov 25 08:06 UTC │
	│ ssh     │ functional-333688 ssh -- ls -la /mount-9p                                                                                         │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │ 23 Nov 25 08:06 UTC │
	│ ssh     │ functional-333688 ssh sudo umount -f /mount-9p                                                                                    │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │                     │
	│ mount   │ -p functional-333688 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4279472702/001:/mount3 --alsologtostderr -v=1                │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │                     │
	│ mount   │ -p functional-333688 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4279472702/001:/mount1 --alsologtostderr -v=1                │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │                     │
	│ mount   │ -p functional-333688 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4279472702/001:/mount2 --alsologtostderr -v=1                │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │                     │
	│ ssh     │ functional-333688 ssh findmnt -T /mount1                                                                                          │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │ 23 Nov 25 08:06 UTC │
	│ ssh     │ functional-333688 ssh findmnt -T /mount2                                                                                          │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │ 23 Nov 25 08:06 UTC │
	│ ssh     │ functional-333688 ssh findmnt -T /mount3                                                                                          │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │ 23 Nov 25 08:06 UTC │
	│ mount   │ -p functional-333688 --kill=true                                                                                                  │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │                     │
	│ addons  │ functional-333688 addons list                                                                                                     │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │ 23 Nov 25 08:06 UTC │
	│ addons  │ functional-333688 addons list -o json                                                                                             │ functional-333688 │ jenkins │ v1.37.0 │ 23 Nov 25 08:06 UTC │ 23 Nov 25 08:06 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:05:08
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:05:08.881078 1063012 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:05:08.881183 1063012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:05:08.881187 1063012 out.go:374] Setting ErrFile to fd 2...
	I1123 08:05:08.881192 1063012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:05:08.881523 1063012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:05:08.881934 1063012 out.go:368] Setting JSON to false
	I1123 08:05:08.882813 1063012 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":31654,"bootTime":1763853455,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 08:05:08.882900 1063012 start.go:143] virtualization:  
	I1123 08:05:08.886270 1063012 out.go:179] * [functional-333688] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:05:08.889997 1063012 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:05:08.890155 1063012 notify.go:221] Checking for updates...
	I1123 08:05:08.895998 1063012 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:05:08.898973 1063012 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:05:08.901930 1063012 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 08:05:08.904863 1063012 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:05:08.907958 1063012 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:05:08.911505 1063012 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:05:08.911610 1063012 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:05:08.937331 1063012 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:05:08.937476 1063012 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:05:09.000313 1063012 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-23 08:05:08.99083834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:05:09.000406 1063012 docker.go:319] overlay module found
	I1123 08:05:09.012000 1063012 out.go:179] * Using the docker driver based on existing profile
	I1123 08:05:09.014995 1063012 start.go:309] selected driver: docker
	I1123 08:05:09.015004 1063012 start.go:927] validating driver "docker" against &{Name:functional-333688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-333688 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:05:09.015110 1063012 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:05:09.015248 1063012 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:05:09.080929 1063012 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-23 08:05:09.072649293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:05:09.081337 1063012 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:05:09.081368 1063012 cni.go:84] Creating CNI manager for ""
	I1123 08:05:09.081436 1063012 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:05:09.081472 1063012 start.go:353] cluster config:
	{Name:functional-333688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-333688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:05:09.086452 1063012 out.go:179] * Starting "functional-333688" primary control-plane node in "functional-333688" cluster
	I1123 08:05:09.089253 1063012 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:05:09.092236 1063012 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:05:09.095046 1063012 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:05:09.095093 1063012 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 08:05:09.095108 1063012 cache.go:65] Caching tarball of preloaded images
	I1123 08:05:09.095106 1063012 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:05:09.095228 1063012 preload.go:238] Found /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 08:05:09.095237 1063012 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:05:09.095346 1063012 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/config.json ...
	I1123 08:05:09.113656 1063012 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:05:09.113667 1063012 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:05:09.113688 1063012 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:05:09.113720 1063012 start.go:360] acquireMachinesLock for functional-333688: {Name:mkc06e54d6f66f5b75fe7ba1c9375243a24c582a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:05:09.113781 1063012 start.go:364] duration metric: took 46.3µs to acquireMachinesLock for "functional-333688"
	I1123 08:05:09.113798 1063012 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:05:09.113803 1063012 fix.go:54] fixHost starting: 
	I1123 08:05:09.114060 1063012 cli_runner.go:164] Run: docker container inspect functional-333688 --format={{.State.Status}}
	I1123 08:05:09.130093 1063012 fix.go:112] recreateIfNeeded on functional-333688: state=Running err=<nil>
	W1123 08:05:09.130116 1063012 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 08:05:09.133209 1063012 out.go:252] * Updating the running docker "functional-333688" container ...
	I1123 08:05:09.133236 1063012 machine.go:94] provisionDockerMachine start ...
	I1123 08:05:09.133329 1063012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333688
	I1123 08:05:09.149886 1063012 main.go:143] libmachine: Using SSH client type: native
	I1123 08:05:09.150196 1063012 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34237 <nil> <nil>}
	I1123 08:05:09.150201 1063012 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:05:09.298643 1063012 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-333688
	
	I1123 08:05:09.298657 1063012 ubuntu.go:182] provisioning hostname "functional-333688"
	I1123 08:05:09.298728 1063012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333688
	I1123 08:05:09.315599 1063012 main.go:143] libmachine: Using SSH client type: native
	I1123 08:05:09.315905 1063012 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34237 <nil> <nil>}
	I1123 08:05:09.315915 1063012 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-333688 && echo "functional-333688" | sudo tee /etc/hostname
	I1123 08:05:09.471516 1063012 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-333688
	
	I1123 08:05:09.471586 1063012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333688
	I1123 08:05:09.488881 1063012 main.go:143] libmachine: Using SSH client type: native
	I1123 08:05:09.489204 1063012 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34237 <nil> <nil>}
	I1123 08:05:09.489217 1063012 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-333688' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-333688/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-333688' | sudo tee -a /etc/hosts; 
				fi
			fi
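A quick way to sanity-check the hostname and /etc/hosts mapping that the snippet above sets up is to run, over the same SSH session, something like the following (a minimal sketch; only the hostname is taken from the log):

	hostname                                   # expected: functional-333688
	grep -n 'functional-333688' /etc/hosts     # expected: a 127.0.1.1 entry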
	I1123 08:05:09.639313 1063012 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:05:09.639343 1063012 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 08:05:09.639362 1063012 ubuntu.go:190] setting up certificates
	I1123 08:05:09.639369 1063012 provision.go:84] configureAuth start
	I1123 08:05:09.639451 1063012 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-333688
	I1123 08:05:09.656552 1063012 provision.go:143] copyHostCerts
	I1123 08:05:09.656623 1063012 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem, removing ...
	I1123 08:05:09.656631 1063012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem
	I1123 08:05:09.656705 1063012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 08:05:09.656805 1063012 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem, removing ...
	I1123 08:05:09.656809 1063012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem
	I1123 08:05:09.656834 1063012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 08:05:09.656890 1063012 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem, removing ...
	I1123 08:05:09.656893 1063012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem
	I1123 08:05:09.656914 1063012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
	I1123 08:05:09.656970 1063012 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.functional-333688 san=[127.0.0.1 192.168.49.2 functional-333688 localhost minikube]
	I1123 08:05:09.835835 1063012 provision.go:177] copyRemoteCerts
	I1123 08:05:09.835932 1063012 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:05:09.835986 1063012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333688
	I1123 08:05:09.852170 1063012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34237 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/functional-333688/id_rsa Username:docker}
	I1123 08:05:09.954523 1063012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:05:09.970403 1063012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:05:09.986786 1063012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:05:10.012299 1063012 provision.go:87] duration metric: took 372.897176ms to configureAuth
	I1123 08:05:10.012319 1063012 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:05:10.012572 1063012 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:05:10.012681 1063012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333688
	I1123 08:05:10.040047 1063012 main.go:143] libmachine: Using SSH client type: native
	I1123 08:05:10.040352 1063012 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34237 <nil> <nil>}
	I1123 08:05:10.040363 1063012 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:05:15.458467 1063012 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:05:15.458479 1063012 machine.go:97] duration metric: took 6.325237553s to provisionDockerMachine
	I1123 08:05:15.458489 1063012 start.go:293] postStartSetup for "functional-333688" (driver="docker")
	I1123 08:05:15.458499 1063012 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:05:15.458557 1063012 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:05:15.458609 1063012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333688
	I1123 08:05:15.482880 1063012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34237 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/functional-333688/id_rsa Username:docker}
	I1123 08:05:15.586760 1063012 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:05:15.589871 1063012 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:05:15.589889 1063012 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:05:15.589899 1063012 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/addons for local assets ...
	I1123 08:05:15.589953 1063012 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/files for local assets ...
	I1123 08:05:15.590031 1063012 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem -> 10431592.pem in /etc/ssl/certs
	I1123 08:05:15.590109 1063012 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/test/nested/copy/1043159/hosts -> hosts in /etc/test/nested/copy/1043159
	I1123 08:05:15.590151 1063012 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1043159
	I1123 08:05:15.597262 1063012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:05:15.613924 1063012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/test/nested/copy/1043159/hosts --> /etc/test/nested/copy/1043159/hosts (40 bytes)
	I1123 08:05:15.630999 1063012 start.go:296] duration metric: took 172.496392ms for postStartSetup
	I1123 08:05:15.631066 1063012 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:05:15.631102 1063012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333688
	I1123 08:05:15.647943 1063012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34237 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/functional-333688/id_rsa Username:docker}
	I1123 08:05:15.748434 1063012 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:05:15.753023 1063012 fix.go:56] duration metric: took 6.639212335s for fixHost
	I1123 08:05:15.753038 1063012 start.go:83] releasing machines lock for "functional-333688", held for 6.639249873s
	I1123 08:05:15.753105 1063012 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-333688
	I1123 08:05:15.770720 1063012 ssh_runner.go:195] Run: cat /version.json
	I1123 08:05:15.770741 1063012 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:05:15.770760 1063012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333688
	I1123 08:05:15.770796 1063012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333688
	I1123 08:05:15.792270 1063012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34237 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/functional-333688/id_rsa Username:docker}
	I1123 08:05:15.795518 1063012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34237 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/functional-333688/id_rsa Username:docker}
	I1123 08:05:15.985335 1063012 ssh_runner.go:195] Run: systemctl --version
	I1123 08:05:15.992228 1063012 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:05:16.032226 1063012 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:05:16.036969 1063012 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:05:16.037032 1063012 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:05:16.045078 1063012 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:05:16.045092 1063012 start.go:496] detecting cgroup driver to use...
	I1123 08:05:16.045124 1063012 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:05:16.045173 1063012 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:05:16.061725 1063012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:05:16.075174 1063012 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:05:16.075338 1063012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:05:16.091810 1063012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:05:16.105123 1063012 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:05:16.236312 1063012 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:05:16.374740 1063012 docker.go:234] disabling docker service ...
	I1123 08:05:16.374795 1063012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:05:16.390427 1063012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:05:16.403988 1063012 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:05:16.547942 1063012 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:05:16.693865 1063012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:05:16.707692 1063012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:05:16.725603 1063012 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:05:16.725697 1063012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:05:16.735811 1063012 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 08:05:16.735885 1063012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:05:16.745678 1063012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:05:16.757375 1063012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:05:16.767233 1063012 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:05:16.775908 1063012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:05:16.785377 1063012 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:05:16.793607 1063012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:05:16.802226 1063012 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:05:16.809712 1063012 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:05:16.817038 1063012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:05:16.943494 1063012 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 08:05:23.192257 1063012 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.248739201s)
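Taken together, the sed edits above should leave roughly the following keys in /etc/crio/crio.conf.d/02-crio.conf before the restart; a quick check (file path from the log, expected values inferred from the commands, not verified here):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ... ]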
	I1123 08:05:23.192273 1063012 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:05:23.192322 1063012 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:05:23.196280 1063012 start.go:564] Will wait 60s for crictl version
	I1123 08:05:23.196329 1063012 ssh_runner.go:195] Run: which crictl
	I1123 08:05:23.199676 1063012 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:05:23.227825 1063012 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:05:23.227904 1063012 ssh_runner.go:195] Run: crio --version
	I1123 08:05:23.256152 1063012 ssh_runner.go:195] Run: crio --version
	I1123 08:05:23.286781 1063012 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 08:05:23.289780 1063012 cli_runner.go:164] Run: docker network inspect functional-333688 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:05:23.305260 1063012 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 08:05:23.312495 1063012 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1123 08:05:23.315340 1063012 kubeadm.go:884] updating cluster {Name:functional-333688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-333688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:05:23.315463 1063012 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:05:23.315525 1063012 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:05:23.348499 1063012 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:05:23.348511 1063012 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:05:23.348566 1063012 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:05:23.373477 1063012 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:05:23.373488 1063012 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:05:23.373494 1063012 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1123 08:05:23.373591 1063012 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-333688 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-333688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
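The kubelet drop-in shown above is written further down in this log to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; once it is in place, the merged unit can be inspected with standard systemd tooling (a sketch):

	systemctl cat kubelet     # shows kubelet.service plus the 10-kubeadm.conf drop-in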
	I1123 08:05:23.373669 1063012 ssh_runner.go:195] Run: crio config
	I1123 08:05:23.448496 1063012 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1123 08:05:23.448515 1063012 cni.go:84] Creating CNI manager for ""
	I1123 08:05:23.448524 1063012 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:05:23.448532 1063012 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:05:23.448552 1063012 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-333688 NodeName:functional-333688 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:05:23.448668 1063012 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-333688"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
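If the generated kubeadm config above needs to be checked by hand, the kubeadm binary used in this run can lint it directly, assuming the `config validate` subcommand is available in v1.34.1 (a sketch; the file path is the one written later in this log):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new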
	
	I1123 08:05:23.448742 1063012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:05:23.456430 1063012 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:05:23.456508 1063012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:05:23.464046 1063012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 08:05:23.476731 1063012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:05:23.489230 1063012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1123 08:05:23.502147 1063012 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:05:23.505814 1063012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:05:23.645372 1063012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:05:23.658042 1063012 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688 for IP: 192.168.49.2
	I1123 08:05:23.658052 1063012 certs.go:195] generating shared ca certs ...
	I1123 08:05:23.658073 1063012 certs.go:227] acquiring lock for ca certs: {Name:mk8b2dd1177c57b74f955f055073d275001ee616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:05:23.658202 1063012 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key
	I1123 08:05:23.658274 1063012 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key
	I1123 08:05:23.658281 1063012 certs.go:257] generating profile certs ...
	I1123 08:05:23.658367 1063012 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.key
	I1123 08:05:23.658410 1063012 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/apiserver.key.7dd1eea5
	I1123 08:05:23.658457 1063012 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/proxy-client.key
	I1123 08:05:23.658569 1063012 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem (1338 bytes)
	W1123 08:05:23.658598 1063012 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159_empty.pem, impossibly tiny 0 bytes
	I1123 08:05:23.658605 1063012 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:05:23.658633 1063012 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:05:23.658660 1063012 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:05:23.658682 1063012 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem (1675 bytes)
	I1123 08:05:23.658723 1063012 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:05:23.659414 1063012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:05:23.677217 1063012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:05:23.694860 1063012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:05:23.712298 1063012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:05:23.730802 1063012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 08:05:23.747483 1063012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:05:23.764946 1063012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:05:23.782143 1063012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:05:23.798750 1063012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem --> /usr/share/ca-certificates/1043159.pem (1338 bytes)
	I1123 08:05:23.816229 1063012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /usr/share/ca-certificates/10431592.pem (1708 bytes)
	I1123 08:05:23.833686 1063012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:05:23.852533 1063012 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:05:23.865516 1063012 ssh_runner.go:195] Run: openssl version
	I1123 08:05:23.871580 1063012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1043159.pem && ln -fs /usr/share/ca-certificates/1043159.pem /etc/ssl/certs/1043159.pem"
	I1123 08:05:23.879886 1063012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1043159.pem
	I1123 08:05:23.883694 1063012 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:03 /usr/share/ca-certificates/1043159.pem
	I1123 08:05:23.883751 1063012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1043159.pem
	I1123 08:05:23.926899 1063012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1043159.pem /etc/ssl/certs/51391683.0"
	I1123 08:05:23.934703 1063012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10431592.pem && ln -fs /usr/share/ca-certificates/10431592.pem /etc/ssl/certs/10431592.pem"
	I1123 08:05:23.942721 1063012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10431592.pem
	I1123 08:05:23.946141 1063012 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:03 /usr/share/ca-certificates/10431592.pem
	I1123 08:05:23.946192 1063012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10431592.pem
	I1123 08:05:23.986767 1063012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10431592.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:05:23.994516 1063012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:05:24.002652 1063012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:05:24.008733 1063012 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:05:24.008796 1063012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:05:24.050764 1063012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
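The openssl/ln pairs above follow the usual subject-hash naming for the system trust directory; the last link, for example, could be recreated by hand as follows (a sketch using the minikubeCA paths from the log):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"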
	I1123 08:05:24.059205 1063012 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:05:24.063023 1063012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 08:05:24.106039 1063012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 08:05:24.146873 1063012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 08:05:24.187447 1063012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 08:05:24.227874 1063012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 08:05:24.268411 1063012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
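The -checkend 86400 calls above exit non-zero when a certificate expires within the next 24 hours; the same check with an explicit result message looks roughly like this (cert path reused from the log):

	sudo openssl x509 -noout -enddate -checkend 86400 \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo "ok: more than 24h remaining" || echo "warning: expires within 24h"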
	I1123 08:05:24.308833 1063012 kubeadm.go:401] StartCluster: {Name:functional-333688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-333688 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:05:24.308913 1063012 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:05:24.308973 1063012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:05:24.339308 1063012 cri.go:89] found id: "a54b7fd3e345cc2aea80a3732aee708cbf6030f853c0bfcb12aaa320fdb049fa"
	I1123 08:05:24.339319 1063012 cri.go:89] found id: "086f4eb776c634c1563bd0b838aa3be82785d66ece064f583b9dbc3e77d3b903"
	I1123 08:05:24.339338 1063012 cri.go:89] found id: "99cedab36358c443eb233291e3adc03af663b7c4ee2ab0aae0911b2f4cda61ee"
	I1123 08:05:24.339341 1063012 cri.go:89] found id: "d78d1e1775e287b4f02482d6951781f4cead22a00b484276a9f029e216321032"
	I1123 08:05:24.339344 1063012 cri.go:89] found id: "ac41ca0409779880d25569c22f2a59275a129c9a634e92dd553fbe8db5322cdb"
	I1123 08:05:24.339346 1063012 cri.go:89] found id: "c6c139eed4abd5b9e3a7856b8d06e9fd67ca48e3fc53c487dc266ac151134f30"
	I1123 08:05:24.339348 1063012 cri.go:89] found id: "8a676a87e55ff5ce48f84d72550bf3e49b9a3a1f680f2de7f0e6dd7ad2258061"
	I1123 08:05:24.339350 1063012 cri.go:89] found id: "c7f281a298cfe323612de79580ca505f8206cea7c9227751192d3966c40068ce"
	I1123 08:05:24.339352 1063012 cri.go:89] found id: "65b2c2e681afe1ac92cce8ecce32c490932ce5aa5de460a367af2638c12238a3"
	I1123 08:05:24.339358 1063012 cri.go:89] found id: "55bf96681373f3eeb2c414db7a6842cd3dec4ff85d75c039e61ca184767f7d5e"
	I1123 08:05:24.339360 1063012 cri.go:89] found id: "9068d0ccc7de84166f963ac3b9c6bb4a24efdc4ba739e1ec41c885b95838f07f"
	I1123 08:05:24.339362 1063012 cri.go:89] found id: "e7d1383f8c4f9ca0b9c5f1824a8988b691f17d5262080a19720b7778619766de"
	I1123 08:05:24.339364 1063012 cri.go:89] found id: "aeaf15336592f98325e404ff45d0f4db2ce1c064d987e55dedc947cc4b0d41c3"
	I1123 08:05:24.339366 1063012 cri.go:89] found id: "50fe9e3f0b72e547b7ad0037b92067a0f58749f2f9453e17fff91047806e7530"
	I1123 08:05:24.339368 1063012 cri.go:89] found id: "1b478b914c08c887f32eee1ae7a55acad051096cc9f16ada9b206fe1fe80d919"
	I1123 08:05:24.339372 1063012 cri.go:89] found id: "c9dd539ada2837406c311c1cf0c3b1791e62b29905a696e4e76d03662571b2a0"
	I1123 08:05:24.339374 1063012 cri.go:89] found id: ""
	I1123 08:05:24.339422 1063012 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 08:05:24.350015 1063012 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:05:24Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:05:24.350074 1063012 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:05:24.357504 1063012 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 08:05:24.357513 1063012 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 08:05:24.357562 1063012 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 08:05:24.364785 1063012 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:05:24.365282 1063012 kubeconfig.go:125] found "functional-333688" server: "https://192.168.49.2:8441"
	I1123 08:05:24.366608 1063012 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 08:05:24.375261 1063012 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-23 08:03:26.478862271 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-23 08:05:23.495515859 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1123 08:05:24.375270 1063012 kubeadm.go:1161] stopping kube-system containers ...
	I1123 08:05:24.375284 1063012 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1123 08:05:24.375354 1063012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:05:24.402100 1063012 cri.go:89] found id: "a54b7fd3e345cc2aea80a3732aee708cbf6030f853c0bfcb12aaa320fdb049fa"
	I1123 08:05:24.402110 1063012 cri.go:89] found id: "086f4eb776c634c1563bd0b838aa3be82785d66ece064f583b9dbc3e77d3b903"
	I1123 08:05:24.402114 1063012 cri.go:89] found id: "99cedab36358c443eb233291e3adc03af663b7c4ee2ab0aae0911b2f4cda61ee"
	I1123 08:05:24.402116 1063012 cri.go:89] found id: "d78d1e1775e287b4f02482d6951781f4cead22a00b484276a9f029e216321032"
	I1123 08:05:24.402119 1063012 cri.go:89] found id: "ac41ca0409779880d25569c22f2a59275a129c9a634e92dd553fbe8db5322cdb"
	I1123 08:05:24.402122 1063012 cri.go:89] found id: "c6c139eed4abd5b9e3a7856b8d06e9fd67ca48e3fc53c487dc266ac151134f30"
	I1123 08:05:24.402125 1063012 cri.go:89] found id: "8a676a87e55ff5ce48f84d72550bf3e49b9a3a1f680f2de7f0e6dd7ad2258061"
	I1123 08:05:24.402127 1063012 cri.go:89] found id: "c7f281a298cfe323612de79580ca505f8206cea7c9227751192d3966c40068ce"
	I1123 08:05:24.402129 1063012 cri.go:89] found id: "65b2c2e681afe1ac92cce8ecce32c490932ce5aa5de460a367af2638c12238a3"
	I1123 08:05:24.402137 1063012 cri.go:89] found id: "55bf96681373f3eeb2c414db7a6842cd3dec4ff85d75c039e61ca184767f7d5e"
	I1123 08:05:24.402139 1063012 cri.go:89] found id: "9068d0ccc7de84166f963ac3b9c6bb4a24efdc4ba739e1ec41c885b95838f07f"
	I1123 08:05:24.402141 1063012 cri.go:89] found id: "e7d1383f8c4f9ca0b9c5f1824a8988b691f17d5262080a19720b7778619766de"
	I1123 08:05:24.402144 1063012 cri.go:89] found id: "aeaf15336592f98325e404ff45d0f4db2ce1c064d987e55dedc947cc4b0d41c3"
	I1123 08:05:24.402146 1063012 cri.go:89] found id: "50fe9e3f0b72e547b7ad0037b92067a0f58749f2f9453e17fff91047806e7530"
	I1123 08:05:24.402148 1063012 cri.go:89] found id: "1b478b914c08c887f32eee1ae7a55acad051096cc9f16ada9b206fe1fe80d919"
	I1123 08:05:24.402185 1063012 cri.go:89] found id: "c9dd539ada2837406c311c1cf0c3b1791e62b29905a696e4e76d03662571b2a0"
	I1123 08:05:24.402193 1063012 cri.go:89] found id: ""
	I1123 08:05:24.402198 1063012 cri.go:252] Stopping containers: [a54b7fd3e345cc2aea80a3732aee708cbf6030f853c0bfcb12aaa320fdb049fa 086f4eb776c634c1563bd0b838aa3be82785d66ece064f583b9dbc3e77d3b903 99cedab36358c443eb233291e3adc03af663b7c4ee2ab0aae0911b2f4cda61ee d78d1e1775e287b4f02482d6951781f4cead22a00b484276a9f029e216321032 ac41ca0409779880d25569c22f2a59275a129c9a634e92dd553fbe8db5322cdb c6c139eed4abd5b9e3a7856b8d06e9fd67ca48e3fc53c487dc266ac151134f30 8a676a87e55ff5ce48f84d72550bf3e49b9a3a1f680f2de7f0e6dd7ad2258061 c7f281a298cfe323612de79580ca505f8206cea7c9227751192d3966c40068ce 65b2c2e681afe1ac92cce8ecce32c490932ce5aa5de460a367af2638c12238a3 55bf96681373f3eeb2c414db7a6842cd3dec4ff85d75c039e61ca184767f7d5e 9068d0ccc7de84166f963ac3b9c6bb4a24efdc4ba739e1ec41c885b95838f07f e7d1383f8c4f9ca0b9c5f1824a8988b691f17d5262080a19720b7778619766de aeaf15336592f98325e404ff45d0f4db2ce1c064d987e55dedc947cc4b0d41c3 50fe9e3f0b72e547b7ad0037b92067a0f58749f2f9453e17fff91047806e7530 1b478b914c08c887f32eee1ae7a55acad051096cc
9f16ada9b206fe1fe80d919 c9dd539ada2837406c311c1cf0c3b1791e62b29905a696e4e76d03662571b2a0]
	I1123 08:05:24.402261 1063012 ssh_runner.go:195] Run: which crictl
	I1123 08:05:24.405699 1063012 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 a54b7fd3e345cc2aea80a3732aee708cbf6030f853c0bfcb12aaa320fdb049fa 086f4eb776c634c1563bd0b838aa3be82785d66ece064f583b9dbc3e77d3b903 99cedab36358c443eb233291e3adc03af663b7c4ee2ab0aae0911b2f4cda61ee d78d1e1775e287b4f02482d6951781f4cead22a00b484276a9f029e216321032 ac41ca0409779880d25569c22f2a59275a129c9a634e92dd553fbe8db5322cdb c6c139eed4abd5b9e3a7856b8d06e9fd67ca48e3fc53c487dc266ac151134f30 8a676a87e55ff5ce48f84d72550bf3e49b9a3a1f680f2de7f0e6dd7ad2258061 c7f281a298cfe323612de79580ca505f8206cea7c9227751192d3966c40068ce 65b2c2e681afe1ac92cce8ecce32c490932ce5aa5de460a367af2638c12238a3 55bf96681373f3eeb2c414db7a6842cd3dec4ff85d75c039e61ca184767f7d5e 9068d0ccc7de84166f963ac3b9c6bb4a24efdc4ba739e1ec41c885b95838f07f e7d1383f8c4f9ca0b9c5f1824a8988b691f17d5262080a19720b7778619766de aeaf15336592f98325e404ff45d0f4db2ce1c064d987e55dedc947cc4b0d41c3 50fe9e3f0b72e547b7ad0037b92067a0f58749f2f9453e17fff91047806e7530 1b478b914c08c887f32eee1ae7a55acad051096cc9f16ada9b206fe1fe80d919 c9dd539ada2837406c311c1cf0c3b1791e62b29905a696e4e76d03662571b2a0
	I1123 08:05:24.509321 1063012 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1123 08:05:24.626075 1063012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:05:24.633866 1063012 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov 23 08:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov 23 08:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov 23 08:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Nov 23 08:03 /etc/kubernetes/scheduler.conf
	
	I1123 08:05:24.633929 1063012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1123 08:05:24.642255 1063012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1123 08:05:24.649686 1063012 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:05:24.649740 1063012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:05:24.656851 1063012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1123 08:05:24.663893 1063012 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:05:24.663947 1063012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:05:24.670968 1063012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1123 08:05:24.678203 1063012 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:05:24.678255 1063012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:05:24.685413 1063012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:05:24.692859 1063012 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 08:05:24.737975 1063012 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 08:05:27.510726 1063012 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.772726602s)
	I1123 08:05:27.510786 1063012 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1123 08:05:27.730214 1063012 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 08:05:27.794243 1063012 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1123 08:05:27.872136 1063012 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:05:27.872201 1063012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:05:28.373147 1063012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:05:28.872422 1063012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:05:28.888903 1063012 api_server.go:72] duration metric: took 1.016766343s to wait for apiserver process to appear ...
	I1123 08:05:28.888918 1063012 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:05:28.888936 1063012 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1123 08:05:32.415813 1063012 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 08:05:32.415828 1063012 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 08:05:32.415839 1063012 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1123 08:05:32.562394 1063012 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 08:05:32.562419 1063012 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 08:05:32.889741 1063012 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1123 08:05:32.906815 1063012 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 08:05:32.906833 1063012 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 08:05:33.389387 1063012 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1123 08:05:33.398858 1063012 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 08:05:33.398892 1063012 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 08:05:33.889663 1063012 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1123 08:05:33.898055 1063012 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1123 08:05:33.912817 1063012 api_server.go:141] control plane version: v1.34.1
	I1123 08:05:33.912834 1063012 api_server.go:131] duration metric: took 5.023910902s to wait for apiserver health ...
	I1123 08:05:33.912842 1063012 cni.go:84] Creating CNI manager for ""
	I1123 08:05:33.912847 1063012 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:05:33.916230 1063012 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:05:33.919262 1063012 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:05:33.923119 1063012 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:05:33.923129 1063012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:05:33.935670 1063012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:05:34.381918 1063012 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:05:34.385317 1063012 system_pods.go:59] 8 kube-system pods found
	I1123 08:05:34.385337 1063012 system_pods.go:61] "coredns-66bc5c9577-9dwq5" [f7842bcd-c4d5-4675-a661-f011d3ef3278] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:05:34.385344 1063012 system_pods.go:61] "etcd-functional-333688" [6c016e17-e68e-4fb7-aade-b21ce2dab2f2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:05:34.385348 1063012 system_pods.go:61] "kindnet-zvxq4" [9f80069d-be94-4b24-a5ee-e7c8215c55bf] Running
	I1123 08:05:34.385355 1063012 system_pods.go:61] "kube-apiserver-functional-333688" [8159f5cb-2db0-4001-8033-a7413783a65e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:05:34.385360 1063012 system_pods.go:61] "kube-controller-manager-functional-333688" [ab00e6a5-28c0-4cbb-9412-245188f531da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:05:34.385364 1063012 system_pods.go:61] "kube-proxy-7stn4" [61e3b92f-2be3-4aa5-89a4-269d7a8e6b4b] Running
	I1123 08:05:34.385368 1063012 system_pods.go:61] "kube-scheduler-functional-333688" [7892d5de-b8d8-47ed-8f24-cde9afeda064] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:05:34.385371 1063012 system_pods.go:61] "storage-provisioner" [5a59492a-a4e6-45f9-81fd-5e93313ded3b] Running
	I1123 08:05:34.385375 1063012 system_pods.go:74] duration metric: took 3.447828ms to wait for pod list to return data ...
	I1123 08:05:34.385381 1063012 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:05:34.388231 1063012 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:05:34.388248 1063012 node_conditions.go:123] node cpu capacity is 2
	I1123 08:05:34.388258 1063012 node_conditions.go:105] duration metric: took 2.873461ms to run NodePressure ...
	I1123 08:05:34.388318 1063012 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 08:05:34.661001 1063012 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1123 08:05:34.669033 1063012 kubeadm.go:744] kubelet initialised
	I1123 08:05:34.669044 1063012 kubeadm.go:745] duration metric: took 8.030556ms waiting for restarted kubelet to initialise ...
	I1123 08:05:34.669058 1063012 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:05:34.684278 1063012 ops.go:34] apiserver oom_adj: -16
	I1123 08:05:34.684289 1063012 kubeadm.go:602] duration metric: took 10.326770923s to restartPrimaryControlPlane
	I1123 08:05:34.684296 1063012 kubeadm.go:403] duration metric: took 10.375473785s to StartCluster
	I1123 08:05:34.684311 1063012 settings.go:142] acquiring lock: {Name:mk23f3092f33e47ced9558cb4bac2b30c55547fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:05:34.684375 1063012 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:05:34.684972 1063012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:05:34.685189 1063012 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:05:34.685514 1063012 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:05:34.685568 1063012 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:05:34.685676 1063012 addons.go:70] Setting storage-provisioner=true in profile "functional-333688"
	I1123 08:05:34.685688 1063012 addons.go:239] Setting addon storage-provisioner=true in "functional-333688"
	W1123 08:05:34.685693 1063012 addons.go:248] addon storage-provisioner should already be in state true
	I1123 08:05:34.685696 1063012 addons.go:70] Setting default-storageclass=true in profile "functional-333688"
	I1123 08:05:34.685725 1063012 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-333688"
	I1123 08:05:34.685712 1063012 host.go:66] Checking if "functional-333688" exists ...
	I1123 08:05:34.686074 1063012 cli_runner.go:164] Run: docker container inspect functional-333688 --format={{.State.Status}}
	I1123 08:05:34.686263 1063012 cli_runner.go:164] Run: docker container inspect functional-333688 --format={{.State.Status}}
	I1123 08:05:34.688723 1063012 out.go:179] * Verifying Kubernetes components...
	I1123 08:05:34.697769 1063012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:05:34.719284 1063012 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:05:34.722296 1063012 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:05:34.722307 1063012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:05:34.722383 1063012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333688
	I1123 08:05:34.737535 1063012 addons.go:239] Setting addon default-storageclass=true in "functional-333688"
	W1123 08:05:34.737546 1063012 addons.go:248] addon default-storageclass should already be in state true
	I1123 08:05:34.737568 1063012 host.go:66] Checking if "functional-333688" exists ...
	I1123 08:05:34.738021 1063012 cli_runner.go:164] Run: docker container inspect functional-333688 --format={{.State.Status}}
	I1123 08:05:34.763903 1063012 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:05:34.763915 1063012 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:05:34.763973 1063012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333688
	I1123 08:05:34.784694 1063012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34237 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/functional-333688/id_rsa Username:docker}
	I1123 08:05:34.800473 1063012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34237 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/functional-333688/id_rsa Username:docker}
	I1123 08:05:34.937272 1063012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:05:34.944001 1063012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:05:34.957447 1063012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:05:35.714700 1063012 node_ready.go:35] waiting up to 6m0s for node "functional-333688" to be "Ready" ...
	I1123 08:05:35.717744 1063012 node_ready.go:49] node "functional-333688" is "Ready"
	I1123 08:05:35.717757 1063012 node_ready.go:38] duration metric: took 3.02914ms for node "functional-333688" to be "Ready" ...
	I1123 08:05:35.717768 1063012 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:05:35.717826 1063012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:05:35.729203 1063012 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 08:05:35.732006 1063012 addons.go:530] duration metric: took 1.046447923s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:05:35.732108 1063012 api_server.go:72] duration metric: took 1.046897124s to wait for apiserver process to appear ...
	I1123 08:05:35.732117 1063012 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:05:35.732135 1063012 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1123 08:05:35.741216 1063012 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1123 08:05:35.742174 1063012 api_server.go:141] control plane version: v1.34.1
	I1123 08:05:35.742187 1063012 api_server.go:131] duration metric: took 10.06372ms to wait for apiserver health ...
	I1123 08:05:35.742194 1063012 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:05:35.744881 1063012 system_pods.go:59] 8 kube-system pods found
	I1123 08:05:35.744894 1063012 system_pods.go:61] "coredns-66bc5c9577-9dwq5" [f7842bcd-c4d5-4675-a661-f011d3ef3278] Running
	I1123 08:05:35.744902 1063012 system_pods.go:61] "etcd-functional-333688" [6c016e17-e68e-4fb7-aade-b21ce2dab2f2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:05:35.744905 1063012 system_pods.go:61] "kindnet-zvxq4" [9f80069d-be94-4b24-a5ee-e7c8215c55bf] Running
	I1123 08:05:35.744912 1063012 system_pods.go:61] "kube-apiserver-functional-333688" [8159f5cb-2db0-4001-8033-a7413783a65e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:05:35.744920 1063012 system_pods.go:61] "kube-controller-manager-functional-333688" [ab00e6a5-28c0-4cbb-9412-245188f531da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:05:35.744923 1063012 system_pods.go:61] "kube-proxy-7stn4" [61e3b92f-2be3-4aa5-89a4-269d7a8e6b4b] Running
	I1123 08:05:35.744928 1063012 system_pods.go:61] "kube-scheduler-functional-333688" [7892d5de-b8d8-47ed-8f24-cde9afeda064] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:05:35.744931 1063012 system_pods.go:61] "storage-provisioner" [5a59492a-a4e6-45f9-81fd-5e93313ded3b] Running
	I1123 08:05:35.744935 1063012 system_pods.go:74] duration metric: took 2.736562ms to wait for pod list to return data ...
	I1123 08:05:35.744940 1063012 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:05:35.746944 1063012 default_sa.go:45] found service account: "default"
	I1123 08:05:35.746954 1063012 default_sa.go:55] duration metric: took 2.009723ms for default service account to be created ...
	I1123 08:05:35.746960 1063012 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:05:35.749828 1063012 system_pods.go:86] 8 kube-system pods found
	I1123 08:05:35.749841 1063012 system_pods.go:89] "coredns-66bc5c9577-9dwq5" [f7842bcd-c4d5-4675-a661-f011d3ef3278] Running
	I1123 08:05:35.749848 1063012 system_pods.go:89] "etcd-functional-333688" [6c016e17-e68e-4fb7-aade-b21ce2dab2f2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:05:35.749852 1063012 system_pods.go:89] "kindnet-zvxq4" [9f80069d-be94-4b24-a5ee-e7c8215c55bf] Running
	I1123 08:05:35.749858 1063012 system_pods.go:89] "kube-apiserver-functional-333688" [8159f5cb-2db0-4001-8033-a7413783a65e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:05:35.749864 1063012 system_pods.go:89] "kube-controller-manager-functional-333688" [ab00e6a5-28c0-4cbb-9412-245188f531da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:05:35.749867 1063012 system_pods.go:89] "kube-proxy-7stn4" [61e3b92f-2be3-4aa5-89a4-269d7a8e6b4b] Running
	I1123 08:05:35.749872 1063012 system_pods.go:89] "kube-scheduler-functional-333688" [7892d5de-b8d8-47ed-8f24-cde9afeda064] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:05:35.749876 1063012 system_pods.go:89] "storage-provisioner" [5a59492a-a4e6-45f9-81fd-5e93313ded3b] Running
	I1123 08:05:35.749882 1063012 system_pods.go:126] duration metric: took 2.917923ms to wait for k8s-apps to be running ...
	I1123 08:05:35.749888 1063012 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:05:35.749945 1063012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:05:35.763665 1063012 system_svc.go:56] duration metric: took 13.767344ms WaitForService to wait for kubelet
	I1123 08:05:35.763683 1063012 kubeadm.go:587] duration metric: took 1.078473665s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:05:35.763698 1063012 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:05:35.766241 1063012 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:05:35.766254 1063012 node_conditions.go:123] node cpu capacity is 2
	I1123 08:05:35.766263 1063012 node_conditions.go:105] duration metric: took 2.560616ms to run NodePressure ...
	I1123 08:05:35.766274 1063012 start.go:242] waiting for startup goroutines ...
	I1123 08:05:35.766280 1063012 start.go:247] waiting for cluster config update ...
	I1123 08:05:35.766290 1063012 start.go:256] writing updated cluster config ...
	I1123 08:05:35.766595 1063012 ssh_runner.go:195] Run: rm -f paused
	I1123 08:05:35.769960 1063012 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:05:35.773149 1063012 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9dwq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:05:35.777437 1063012 pod_ready.go:94] pod "coredns-66bc5c9577-9dwq5" is "Ready"
	I1123 08:05:35.777449 1063012 pod_ready.go:86] duration metric: took 4.288428ms for pod "coredns-66bc5c9577-9dwq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:05:35.779632 1063012 pod_ready.go:83] waiting for pod "etcd-functional-333688" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:05:37.784975 1063012 pod_ready.go:104] pod "etcd-functional-333688" is not "Ready", error: <nil>
	I1123 08:05:39.286496 1063012 pod_ready.go:94] pod "etcd-functional-333688" is "Ready"
	I1123 08:05:39.286511 1063012 pod_ready.go:86] duration metric: took 3.5068688s for pod "etcd-functional-333688" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:05:39.288902 1063012 pod_ready.go:83] waiting for pod "kube-apiserver-functional-333688" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:05:41.294849 1063012 pod_ready.go:104] pod "kube-apiserver-functional-333688" is not "Ready", error: <nil>
	W1123 08:05:43.793982 1063012 pod_ready.go:104] pod "kube-apiserver-functional-333688" is not "Ready", error: <nil>
	W1123 08:05:45.795744 1063012 pod_ready.go:104] pod "kube-apiserver-functional-333688" is not "Ready", error: <nil>
	I1123 08:05:47.794460 1063012 pod_ready.go:94] pod "kube-apiserver-functional-333688" is "Ready"
	I1123 08:05:47.794474 1063012 pod_ready.go:86] duration metric: took 8.505560893s for pod "kube-apiserver-functional-333688" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:05:47.796710 1063012 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-333688" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:05:47.801239 1063012 pod_ready.go:94] pod "kube-controller-manager-functional-333688" is "Ready"
	I1123 08:05:47.801251 1063012 pod_ready.go:86] duration metric: took 4.529118ms for pod "kube-controller-manager-functional-333688" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:05:47.803367 1063012 pod_ready.go:83] waiting for pod "kube-proxy-7stn4" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:05:47.808095 1063012 pod_ready.go:94] pod "kube-proxy-7stn4" is "Ready"
	I1123 08:05:47.808108 1063012 pod_ready.go:86] duration metric: took 4.7299ms for pod "kube-proxy-7stn4" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:05:47.810344 1063012 pod_ready.go:83] waiting for pod "kube-scheduler-functional-333688" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:05:47.992302 1063012 pod_ready.go:94] pod "kube-scheduler-functional-333688" is "Ready"
	I1123 08:05:47.992325 1063012 pod_ready.go:86] duration metric: took 181.969762ms for pod "kube-scheduler-functional-333688" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:05:47.992336 1063012 pod_ready.go:40] duration metric: took 12.222358269s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:05:48.046080 1063012 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 08:05:48.049661 1063012 out.go:179] * Done! kubectl is now configured to use "functional-333688" cluster and "default" namespace by default
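For reference, the restart log above gates on the apiserver health endpoint: api_server.go polls https://192.168.49.2:8441/healthz roughly every 500ms, tolerating the 403 (anonymous user) and 500 (poststarthooks still initialising) responses seen earlier until it receives a 200. Below is a minimal Go sketch of that polling pattern; the function name, HTTP client setup, and timeout value are illustrative assumptions for this report, not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline passes. It mirrors the retry pattern visible in
// the log above (403, then 500 while poststarthooks finish, then 200);
// it is a sketch, not minikube's real health check.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// During bootstrap the apiserver presents a certificate the host does
		// not trust yet, so this sketch skips verification; a real client
		// should verify against the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 and 500 are expected while the control plane comes up.
			fmt.Printf("healthz not ready (%d): %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	// Endpoint taken from the log above; adjust for another cluster.
	if err := waitForHealthz("https://192.168.49.2:8441/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}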
	
	
	==> CRI-O <==
	Nov 23 08:06:28 functional-333688 crio[3535]: time="2025-11-23T08:06:28.414517982Z" level=info msg="Creating container: default/sp-pod/myfrontend" id=cb4ec755-6036-424f-9607-8a632f3b1cf9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:06:28 functional-333688 crio[3535]: time="2025-11-23T08:06:28.414764768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:06:28 functional-333688 crio[3535]: time="2025-11-23T08:06:28.420795841Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:06:28 functional-333688 crio[3535]: time="2025-11-23T08:06:28.421343985Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:06:28 functional-333688 crio[3535]: time="2025-11-23T08:06:28.435876517Z" level=info msg="Created container 4a31f0f1091d96dea41c821703754093018bd43089ff8a6a566be038054deba4: default/sp-pod/myfrontend" id=cb4ec755-6036-424f-9607-8a632f3b1cf9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:06:28 functional-333688 crio[3535]: time="2025-11-23T08:06:28.436652069Z" level=info msg="Starting container: 4a31f0f1091d96dea41c821703754093018bd43089ff8a6a566be038054deba4" id=92add325-ad84-44e7-999e-66ec2a96c6df name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:06:28 functional-333688 crio[3535]: time="2025-11-23T08:06:28.438489702Z" level=info msg="Started container" PID=6001 containerID=4a31f0f1091d96dea41c821703754093018bd43089ff8a6a566be038054deba4 description=default/sp-pod/myfrontend id=92add325-ad84-44e7-999e-66ec2a96c6df name=/runtime.v1.RuntimeService/StartContainer sandboxID=7126128823e62eaa4adc47ef56c7f91435cb66e12181d864c436010437266741
	Nov 23 08:06:34 functional-333688 crio[3535]: time="2025-11-23T08:06:34.887653253Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3d5f21c6-b3d7-46b9-8db9-dc9b57e5ebe5 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:06:36 functional-333688 crio[3535]: time="2025-11-23T08:06:36.28504802Z" level=info msg="Running pod sandbox: default/hello-node-75c85bcc94-l2cjr/POD" id=501cefc2-cf0b-4447-b636-65e0436d32ce name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:06:36 functional-333688 crio[3535]: time="2025-11-23T08:06:36.285109893Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:06:36 functional-333688 crio[3535]: time="2025-11-23T08:06:36.291762068Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-l2cjr Namespace:default ID:e6ba9c9cca3d826860a44f297168a70cc913df29d373b67b3049f60d418e3968 UID:b0961a45-5698-4691-8f6b-6e4393f30baa NetNS:/var/run/netns/c54c7533-3728-42b7-911b-5abf87fb8d66 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000319c08}] Aliases:map[]}"
	Nov 23 08:06:36 functional-333688 crio[3535]: time="2025-11-23T08:06:36.291800172Z" level=info msg="Adding pod default_hello-node-75c85bcc94-l2cjr to CNI network \"kindnet\" (type=ptp)"
	Nov 23 08:06:36 functional-333688 crio[3535]: time="2025-11-23T08:06:36.303120562Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-l2cjr Namespace:default ID:e6ba9c9cca3d826860a44f297168a70cc913df29d373b67b3049f60d418e3968 UID:b0961a45-5698-4691-8f6b-6e4393f30baa NetNS:/var/run/netns/c54c7533-3728-42b7-911b-5abf87fb8d66 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000319c08}] Aliases:map[]}"
	Nov 23 08:06:36 functional-333688 crio[3535]: time="2025-11-23T08:06:36.303681768Z" level=info msg="Checking pod default_hello-node-75c85bcc94-l2cjr for CNI network kindnet (type=ptp)"
	Nov 23 08:06:36 functional-333688 crio[3535]: time="2025-11-23T08:06:36.30655655Z" level=info msg="Ran pod sandbox e6ba9c9cca3d826860a44f297168a70cc913df29d373b67b3049f60d418e3968 with infra container: default/hello-node-75c85bcc94-l2cjr/POD" id=501cefc2-cf0b-4447-b636-65e0436d32ce name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:06:36 functional-333688 crio[3535]: time="2025-11-23T08:06:36.309563832Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=abbd4c10-0104-48b2-b870-69b9e16a1de8 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:06:51 functional-333688 crio[3535]: time="2025-11-23T08:06:51.889191203Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4dd83a3a-88f7-439a-b99f-1c30478a00e9 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:07:00 functional-333688 crio[3535]: time="2025-11-23T08:07:00.888088626Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=025df62c-bbf7-41f6-8180-d81e3701f093 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:07:20 functional-333688 crio[3535]: time="2025-11-23T08:07:20.888122497Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=92040653-0040-4639-aeea-78b2844e41d0 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:07:42 functional-333688 crio[3535]: time="2025-11-23T08:07:42.887171876Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=734c9263-f142-431d-b01b-a61350ae2800 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:08:08 functional-333688 crio[3535]: time="2025-11-23T08:08:08.888043019Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6a57e841-d516-47ba-98c4-57fb720da125 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:09:05 functional-333688 crio[3535]: time="2025-11-23T08:09:05.889127911Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=581ccf48-8931-4a68-b060-fea7804386ae name=/runtime.v1.ImageService/PullImage
	Nov 23 08:09:38 functional-333688 crio[3535]: time="2025-11-23T08:09:38.888027642Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=91b740d9-daa8-4daa-9b8c-46290adb2689 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:11:48 functional-333688 crio[3535]: time="2025-11-23T08:11:48.887754849Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c24797f9-4ac1-40b6-be1b-fe33509a542c name=/runtime.v1.ImageService/PullImage
	Nov 23 08:12:24 functional-333688 crio[3535]: time="2025-11-23T08:12:24.887810999Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=249ea579-8bc7-4cb6-93b5-7db1df6994f9 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4a31f0f1091d9       docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712       9 minutes ago       Running             myfrontend                0                   7126128823e62       sp-pod                                      default
	4c8b651151cb6       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   10 minutes ago      Exited              mount-munger              0                   bd26529ecf670       busybox-mount                               default
	7a2d38b270b43       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90       10 minutes ago      Running             nginx                     0                   1b9b7ec663d66       nginx-svc                                   default
	4748a70449d74       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      10 minutes ago      Running             storage-provisioner       3                   8b3b5afa68244       storage-provisioner                         kube-system
	883ec80faeb03       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      10 minutes ago      Running             kindnet-cni               2                   6ec63527003a0       kindnet-zvxq4                               kube-system
	43c7ff471b511       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      10 minutes ago      Running             kube-proxy                2                   dfc8001af847c       kube-proxy-7stn4                            kube-system
	100f2a98b8f19       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      10 minutes ago      Running             coredns                   2                   073915dbd2570       coredns-66bc5c9577-9dwq5                    kube-system
	6d65cd91fb46e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      10 minutes ago      Running             kube-apiserver            0                   3f17256eda281       kube-apiserver-functional-333688            kube-system
	7b294ee86a3ab       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      10 minutes ago      Running             kube-scheduler            2                   f8d539baba1c7       kube-scheduler-functional-333688            kube-system
	1109fe1b0d71e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      10 minutes ago      Running             kube-controller-manager   2                   c2826b1e1c9b7       kube-controller-manager-functional-333688   kube-system
	d4ce2379d8589       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      10 minutes ago      Running             etcd                      2                   a8c555bb0a807       etcd-functional-333688                      kube-system
	a54b7fd3e345c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      11 minutes ago      Exited              storage-provisioner       2                   8b3b5afa68244       storage-provisioner                         kube-system
	086f4eb776c63       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      11 minutes ago      Exited              kube-proxy                1                   dfc8001af847c       kube-proxy-7stn4                            kube-system
	99cedab36358c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      11 minutes ago      Exited              kube-scheduler            1                   f8d539baba1c7       kube-scheduler-functional-333688            kube-system
	d78d1e1775e28       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      11 minutes ago      Exited              coredns                   1                   073915dbd2570       coredns-66bc5c9577-9dwq5                    kube-system
	8a676a87e55ff       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      11 minutes ago      Exited              kindnet-cni               1                   6ec63527003a0       kindnet-zvxq4                               kube-system
	c7f281a298cfe       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      11 minutes ago      Exited              kube-controller-manager   1                   c2826b1e1c9b7       kube-controller-manager-functional-333688   kube-system
	65b2c2e681afe       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      11 minutes ago      Exited              etcd                      1                   a8c555bb0a807       etcd-functional-333688                      kube-system
	
	
	==> coredns [100f2a98b8f198bc5da474bdcd1aec93422fe5d0e8011f4281269106aad1ab14] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58935 - 62028 "HINFO IN 4303484004516387875.9053964438786807037. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004036299s
	
	
	==> coredns [d78d1e1775e287b4f02482d6951781f4cead22a00b484276a9f029e216321032] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53176 - 61111 "HINFO IN 4750201297868451342.1215058759959204496. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033142897s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-333688
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-333688
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=functional-333688
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_03_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:03:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-333688
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:16:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:15:35 +0000   Sun, 23 Nov 2025 08:03:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:15:35 +0000   Sun, 23 Nov 2025 08:03:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:15:35 +0000   Sun, 23 Nov 2025 08:03:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:15:35 +0000   Sun, 23 Nov 2025 08:04:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-333688
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                44ae431b-6ff2-4e25-8ba6-430e8a1f929d
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-l2cjr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  default                     hello-node-connect-7d85dfc575-cmzr4          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-66bc5c9577-9dwq5                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-333688                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-zvxq4                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-333688             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-333688    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-7stn4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-333688             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-333688 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-333688 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-333688 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-333688 event: Registered Node functional-333688 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-333688 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-333688 event: Registered Node functional-333688 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-333688 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-333688 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-333688 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-333688 event: Registered Node functional-333688 in Controller
	
	
	==> dmesg <==
	[Nov23 07:12] overlayfs: idmapped layers are currently not supported
	[Nov23 07:13] overlayfs: idmapped layers are currently not supported
	[Nov23 07:14] overlayfs: idmapped layers are currently not supported
	[ +16.709544] overlayfs: idmapped layers are currently not supported
	[ +39.052436] overlayfs: idmapped layers are currently not supported
	[Nov23 07:16] overlayfs: idmapped layers are currently not supported
	[Nov23 07:17] overlayfs: idmapped layers are currently not supported
	[Nov23 07:18] overlayfs: idmapped layers are currently not supported
	[ +42.777291] overlayfs: idmapped layers are currently not supported
	[Nov23 07:19] overlayfs: idmapped layers are currently not supported
	[Nov23 07:20] overlayfs: idmapped layers are currently not supported
	[Nov23 07:21] overlayfs: idmapped layers are currently not supported
	[ +25.538176] overlayfs: idmapped layers are currently not supported
	[Nov23 07:22] overlayfs: idmapped layers are currently not supported
	[ +17.484475] overlayfs: idmapped layers are currently not supported
	[Nov23 07:23] overlayfs: idmapped layers are currently not supported
	[Nov23 07:24] overlayfs: idmapped layers are currently not supported
	[Nov23 07:25] overlayfs: idmapped layers are currently not supported
	[Nov23 07:26] overlayfs: idmapped layers are currently not supported
	[Nov23 07:27] overlayfs: idmapped layers are currently not supported
	[ +38.121959] overlayfs: idmapped layers are currently not supported
	[Nov23 07:55] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 07:56] overlayfs: idmapped layers are currently not supported
	[Nov23 08:02] overlayfs: idmapped layers are currently not supported
	[Nov23 08:03] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [65b2c2e681afe1ac92cce8ecce32c490932ce5aa5de460a367af2638c12238a3] <==
	{"level":"warn","ts":"2025-11-23T08:04:44.570083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:04:44.585907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:04:44.609133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:04:44.630182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:04:44.645551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:04:44.660226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:04:44.734682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55610","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T08:05:10.213008Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-23T08:05:10.213074Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-333688","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-23T08:05:10.213194Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T08:05:10.487023Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T08:05:10.487122Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T08:05:10.487147Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-23T08:05:10.487213Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-23T08:05:10.487310Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-23T08:05:10.487361Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T08:05:10.487381Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T08:05:10.487390Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-23T08:05:10.487301Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T08:05:10.487404Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T08:05:10.487410Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T08:05:10.491253Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-23T08:05:10.491334Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T08:05:10.491375Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-23T08:05:10.491427Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-333688","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [d4ce2379d8589a4654f40cef824cc1167cc0e0e9fbef10fb4416d6ce6412dfe4] <==
	{"level":"warn","ts":"2025-11-23T08:05:30.914956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:30.947357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.020701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.025723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.067250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.096139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.134786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.164106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.174422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.191258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.219505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.253715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.274892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.299544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.317460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.351136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.379972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.441089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.465001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.502158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.517032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:05:31.610850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56750","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T08:15:30.029087Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1096}
	{"level":"info","ts":"2025-11-23T08:15:30.053514Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1096,"took":"24.059549ms","hash":2927791257,"current-db-size-bytes":3309568,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1392640,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-11-23T08:15:30.053584Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2927791257,"revision":1096,"compact-revision":-1}
	
	
	==> kernel <==
	 08:16:22 up  8:58,  0 user,  load average: 0.23, 0.30, 0.62
	Linux functional-333688 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [883ec80faeb038ab24223583e7b734946edb5035f3215f2788016b363f31d20a] <==
	I1123 08:14:13.537973       1 main.go:301] handling current node
	I1123 08:14:23.543540       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:14:23.543571       1 main.go:301] handling current node
	I1123 08:14:33.544164       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:14:33.544276       1 main.go:301] handling current node
	I1123 08:14:43.536673       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:14:43.536756       1 main.go:301] handling current node
	I1123 08:14:53.543352       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:14:53.543471       1 main.go:301] handling current node
	I1123 08:15:03.544455       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:15:03.544583       1 main.go:301] handling current node
	I1123 08:15:13.539253       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:15:13.539381       1 main.go:301] handling current node
	I1123 08:15:23.539561       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:15:23.539599       1 main.go:301] handling current node
	I1123 08:15:33.536802       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:15:33.536909       1 main.go:301] handling current node
	I1123 08:15:43.536891       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:15:43.536927       1 main.go:301] handling current node
	I1123 08:15:53.541503       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:15:53.541537       1 main.go:301] handling current node
	I1123 08:16:03.541952       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:16:03.541987       1 main.go:301] handling current node
	I1123 08:16:13.539276       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:16:13.539316       1 main.go:301] handling current node
	
	
	==> kindnet [8a676a87e55ff5ce48f84d72550bf3e49b9a3a1f680f2de7f0e6dd7ad2258061] <==
	I1123 08:04:41.286110       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:04:41.286525       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1123 08:04:41.298514       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:04:41.298614       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:04:41.298655       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:04:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:04:41.491724       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:04:41.491754       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:04:41.491764       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:04:41.491876       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:04:45.685335       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 08:04:45.685385       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 08:04:45.685491       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 08:04:45.685435       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 08:04:46.792177       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:04:46.792210       1 metrics.go:72] Registering metrics
	I1123 08:04:46.792279       1 controller.go:711] "Syncing nftables rules"
	I1123 08:04:51.487071       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:04:51.487130       1 main.go:301] handling current node
	I1123 08:05:01.484682       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:05:01.484713       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6d65cd91fb46ebd96b242822211ab526d5fdad0fb60ab4398d5dfaf334cdbba8] <==
	I1123 08:05:32.589313       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:05:32.597278       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 08:05:32.608202       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 08:05:32.608346       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 08:05:32.613656       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 08:05:32.613753       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 08:05:32.613820       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:05:32.621611       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1123 08:05:32.622673       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 08:05:32.630365       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 08:05:32.881767       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:05:33.370775       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:05:34.374407       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:05:34.511698       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:05:34.629533       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:05:34.648053       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:05:50.727454       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:05:51.303320       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.50.80"}
	I1123 08:05:51.322377       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:05:56.536967       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.104.187.176"}
	I1123 08:06:19.774747       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:06:19.905520       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.127.77"}
	E1123 08:06:35.849919       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51526: use of closed network connection
	I1123 08:06:36.063998       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.247.135"}
	I1123 08:15:32.505242       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [1109fe1b0d71ecc591b7f37f9adf0f11c41a6e748a46e21d4accb630c8936ef6] <==
	I1123 08:05:35.899827       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 08:05:35.899483       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:05:35.899459       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 08:05:35.907159       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:05:35.910258       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:05:35.910283       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:05:35.915456       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:05:35.915548       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 08:05:35.915598       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:05:35.914850       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:05:35.910301       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:05:35.914901       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:05:35.920355       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 08:05:35.923148       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:05:35.927079       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 08:05:35.927219       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 08:05:35.927249       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 08:05:35.927254       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 08:05:35.927260       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:05:35.936341       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 08:05:35.939646       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:05:35.941968       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:05:35.941989       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:05:35.942008       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:05:35.951293       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [c7f281a298cfe323612de79580ca505f8206cea7c9227751192d3966c40068ce] <==
	I1123 08:04:49.039566       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 08:04:49.042820       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:04:49.044173       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:04:49.047247       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 08:04:49.047357       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 08:04:49.047428       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:04:49.047471       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 08:04:49.047611       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:04:49.052772       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 08:04:49.055038       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 08:04:49.060290       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:04:49.062541       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:04:49.080675       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 08:04:49.086440       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:04:49.086543       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:04:49.086473       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:04:49.086460       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:04:49.086488       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:04:49.086509       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:04:49.086498       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:04:49.088156       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:04:49.090499       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:04:49.090551       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:04:49.090565       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:04:49.093313       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [086f4eb776c634c1563bd0b838aa3be82785d66ece064f583b9dbc3e77d3b903] <==
	I1123 08:04:41.986935       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:04:44.720939       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1123 08:04:45.679496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-333688\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1123 08:04:47.022450       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:04:47.022573       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 08:04:47.022723       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:04:47.049970       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:04:47.050092       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:04:47.054337       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:04:47.054690       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:04:47.054894       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:04:47.056349       1 config.go:200] "Starting service config controller"
	I1123 08:04:47.056369       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:04:47.056385       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:04:47.056389       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:04:47.056411       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:04:47.056416       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:04:47.057049       1 config.go:309] "Starting node config controller"
	I1123 08:04:47.057255       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:04:47.057295       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:04:47.157038       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:04:47.157047       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:04:47.157066       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [43c7ff471b5111a5c89283aa4cf8804b80ed0749735487e9d1b4fe221ac1f8a7] <==
	I1123 08:05:33.347286       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:05:33.460393       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:05:33.562685       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:05:33.562805       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 08:05:33.562930       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:05:33.641451       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:05:33.641512       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:05:33.645566       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:05:33.645856       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:05:33.645880       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:05:33.646973       1 config.go:200] "Starting service config controller"
	I1123 08:05:33.646992       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:05:33.650248       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:05:33.650267       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:05:33.650284       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:05:33.650288       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:05:33.650984       1 config.go:309] "Starting node config controller"
	I1123 08:05:33.651002       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:05:33.651012       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:05:33.747307       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:05:33.750566       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:05:33.750573       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7b294ee86a3ab9007a1129bbf83d4925078844c8517729975c3366091c718fa2] <==
	I1123 08:05:31.491897       1 serving.go:386] Generated self-signed cert in-memory
	W1123 08:05:32.545370       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 08:05:32.545481       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 08:05:32.545516       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 08:05:32.545595       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 08:05:32.607103       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:05:32.607140       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:05:32.617639       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:05:32.618039       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:05:32.618095       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:05:32.618139       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:05:32.718763       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [99cedab36358c443eb233291e3adc03af663b7c4ee2ab0aae0911b2f4cda61ee] <==
	I1123 08:04:42.693566       1 serving.go:386] Generated self-signed cert in-memory
	W1123 08:04:45.629928       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 08:04:45.629956       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 08:04:45.629966       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 08:04:45.629974       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 08:04:45.743734       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:04:45.743763       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:04:45.745707       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:04:45.745755       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:04:45.751537       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:04:45.751496       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:04:45.847525       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:05:10.216165       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1123 08:05:10.216192       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1123 08:05:10.216214       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1123 08:05:10.216236       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:05:10.216443       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1123 08:05:10.216459       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 23 08:13:43 functional-333688 kubelet[3854]: E1123 08:13:43.888039    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cmzr4" podUID="087dfdf0-a046-4156-94e2-3901f709787e"
	Nov 23 08:13:49 functional-333688 kubelet[3854]: E1123 08:13:49.888328    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l2cjr" podUID="b0961a45-5698-4691-8f6b-6e4393f30baa"
	Nov 23 08:13:54 functional-333688 kubelet[3854]: E1123 08:13:54.887480    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cmzr4" podUID="087dfdf0-a046-4156-94e2-3901f709787e"
	Nov 23 08:14:01 functional-333688 kubelet[3854]: E1123 08:14:01.887662    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l2cjr" podUID="b0961a45-5698-4691-8f6b-6e4393f30baa"
	Nov 23 08:14:06 functional-333688 kubelet[3854]: E1123 08:14:06.887322    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cmzr4" podUID="087dfdf0-a046-4156-94e2-3901f709787e"
	Nov 23 08:14:12 functional-333688 kubelet[3854]: E1123 08:14:12.887286    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l2cjr" podUID="b0961a45-5698-4691-8f6b-6e4393f30baa"
	Nov 23 08:14:18 functional-333688 kubelet[3854]: E1123 08:14:18.887370    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cmzr4" podUID="087dfdf0-a046-4156-94e2-3901f709787e"
	Nov 23 08:14:23 functional-333688 kubelet[3854]: E1123 08:14:23.887855    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l2cjr" podUID="b0961a45-5698-4691-8f6b-6e4393f30baa"
	Nov 23 08:14:33 functional-333688 kubelet[3854]: E1123 08:14:33.887165    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cmzr4" podUID="087dfdf0-a046-4156-94e2-3901f709787e"
	Nov 23 08:14:34 functional-333688 kubelet[3854]: E1123 08:14:34.887323    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l2cjr" podUID="b0961a45-5698-4691-8f6b-6e4393f30baa"
	Nov 23 08:14:45 functional-333688 kubelet[3854]: E1123 08:14:45.887457    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l2cjr" podUID="b0961a45-5698-4691-8f6b-6e4393f30baa"
	Nov 23 08:14:47 functional-333688 kubelet[3854]: E1123 08:14:47.887502    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cmzr4" podUID="087dfdf0-a046-4156-94e2-3901f709787e"
	Nov 23 08:15:00 functional-333688 kubelet[3854]: E1123 08:15:00.887385    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l2cjr" podUID="b0961a45-5698-4691-8f6b-6e4393f30baa"
	Nov 23 08:15:01 functional-333688 kubelet[3854]: E1123 08:15:01.888493    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cmzr4" podUID="087dfdf0-a046-4156-94e2-3901f709787e"
	Nov 23 08:15:14 functional-333688 kubelet[3854]: E1123 08:15:14.887318    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l2cjr" podUID="b0961a45-5698-4691-8f6b-6e4393f30baa"
	Nov 23 08:15:15 functional-333688 kubelet[3854]: E1123 08:15:15.888895    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cmzr4" podUID="087dfdf0-a046-4156-94e2-3901f709787e"
	Nov 23 08:15:26 functional-333688 kubelet[3854]: E1123 08:15:26.887404    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cmzr4" podUID="087dfdf0-a046-4156-94e2-3901f709787e"
	Nov 23 08:15:28 functional-333688 kubelet[3854]: E1123 08:15:28.887593    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l2cjr" podUID="b0961a45-5698-4691-8f6b-6e4393f30baa"
	Nov 23 08:15:40 functional-333688 kubelet[3854]: E1123 08:15:40.887725    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l2cjr" podUID="b0961a45-5698-4691-8f6b-6e4393f30baa"
	Nov 23 08:15:41 functional-333688 kubelet[3854]: E1123 08:15:41.887950    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cmzr4" podUID="087dfdf0-a046-4156-94e2-3901f709787e"
	Nov 23 08:15:51 functional-333688 kubelet[3854]: E1123 08:15:51.888150    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l2cjr" podUID="b0961a45-5698-4691-8f6b-6e4393f30baa"
	Nov 23 08:15:55 functional-333688 kubelet[3854]: E1123 08:15:55.888910    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cmzr4" podUID="087dfdf0-a046-4156-94e2-3901f709787e"
	Nov 23 08:16:03 functional-333688 kubelet[3854]: E1123 08:16:03.888853    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l2cjr" podUID="b0961a45-5698-4691-8f6b-6e4393f30baa"
	Nov 23 08:16:10 functional-333688 kubelet[3854]: E1123 08:16:10.886830    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-cmzr4" podUID="087dfdf0-a046-4156-94e2-3901f709787e"
	Nov 23 08:16:17 functional-333688 kubelet[3854]: E1123 08:16:17.889241    3854 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-l2cjr" podUID="b0961a45-5698-4691-8f6b-6e4393f30baa"
	
	
	==> storage-provisioner [4748a70449d742ff6931ac5232867f4c4b5fcfb047fb1c36d29d3e66986ca540] <==
	W1123 08:15:57.484605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:15:59.487278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:15:59.491883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:01.494721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:01.501591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:03.505277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:03.510108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:05.513642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:05.518223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:07.521375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:07.525745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:09.528518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:09.535238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:11.537899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:11.542355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:13.545001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:13.549501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:15.553008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:15.557366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:17.561070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:17.565035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:19.568266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:19.575473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:21.579792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:16:21.587459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a54b7fd3e345cc2aea80a3732aee708cbf6030f853c0bfcb12aaa320fdb049fa] <==
	I1123 08:04:58.716731       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:04:58.729577       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:04:58.729628       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:04:58.731696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:05:02.187965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:05:06.448786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:05:10.047730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
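The repeated ImagePullBackOff entries in the kubelet log above all trace back to CRI-O's short-name enforcement: the unqualified reference kicbase/echo-server:latest matches more than one configured search registry, so the runtime rejects the pull as ambiguous. A minimal, illustrative sketch of a workaround, assuming the Docker Hub copy of the image (docker.io/kicbase/echo-server:1.0) is the intended one; the deployment and container names are taken from the failing pods above:

	# Point both test deployments at a fully qualified image so CRI-O
	# never has to resolve a short name (illustrative only, not part of the test):
	kubectl --context functional-333688 set image deployment/hello-node \
	  echo-server=docker.io/kicbase/echo-server:1.0
	kubectl --context functional-333688 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:1.0
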
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-333688 -n functional-333688
helpers_test.go:269: (dbg) Run:  kubectl --context functional-333688 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-l2cjr hello-node-connect-7d85dfc575-cmzr4
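Note that busybox-mount appears in this list only because it has already completed (Status: Succeeded in the describe output below); the selector status.phase!=Running matches finished pods as well as the two pods stuck in ImagePullBackOff. A stricter selector, shown purely as an illustrative sketch, would exclude completed pods:

	kubectl --context functional-333688 get po -A \
	  --field-selector=status.phase!=Running,status.phase!=Succeeded
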
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-333688 describe pod busybox-mount hello-node-75c85bcc94-l2cjr hello-node-connect-7d85dfc575-cmzr4
helpers_test.go:290: (dbg) kubectl --context functional-333688 describe pod busybox-mount hello-node-75c85bcc94-l2cjr hello-node-connect-7d85dfc575-cmzr4:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-333688/192.168.49.2
	Start Time:       Sun, 23 Nov 2025 08:06:08 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://4c8b651151cb6f654b993470261d4cecf262005609be47196d5233f453e78d5d
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 23 Nov 2025 08:06:11 +0000
	      Finished:     Sun, 23 Nov 2025 08:06:11 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l4xq8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-l4xq8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-333688
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.973s (1.973s including waiting). Image size: 3774172 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-l2cjr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-333688/192.168.49.2
	Start Time:       Sun, 23 Nov 2025 08:06:35 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xspmc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xspmc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m47s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-l2cjr to functional-333688
	  Normal   Pulling    6m45s (x5 over 9m47s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m45s (x5 over 9m47s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m45s (x5 over 9m47s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m45s (x20 over 9m46s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m34s (x21 over 9m46s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-cmzr4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-333688/192.168.49.2
	Start Time:       Sun, 23 Nov 2025 08:06:19 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvfnx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vvfnx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cmzr4 to functional-333688
	  Normal   Pulling    7m18s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m18s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m18s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x42 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     1s (x42 over 10m)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.47s)
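The root cause of this failure is the image pull, not the service plumbing: CRI-O on this node runs short-name resolution in enforcing mode, so the unqualified reference kicbase/echo-server cannot be resolved non-interactively and the pull fails with "returns ambiguous list" (presumably because more than one unqualified-search registry is configured). Two illustrative workarounds, neither taken from this run: deploy with a fully qualified reference (docker.io/kicbase/echo-server:1.0 is an assumed tag), or relax the policy via a registries.conf drop-in on the node.

    kubectl --context functional-333688 create deployment hello-node-connect --image=docker.io/kicbase/echo-server:1.0

    # /etc/containers/registries.conf.d/01-short-names.conf (illustrative)
    short-name-mode = "permissive"
    unqualified-search-registries = ["docker.io"]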

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 image load --daemon kicbase/echo-server:functional-333688 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-333688" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.95s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 image load --daemon kicbase/echo-server:functional-333688 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-333688" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-333688
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 image load --daemon kicbase/echo-server:functional-333688 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-333688" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)
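The three daemon-load failures above share one symptom: "image load --daemon" exits successfully but the tag never appears in "image ls". A reasonable first diagnostic, not performed in this run, is to list what the runtime actually stored, both through minikube and directly through crictl inside the node, to see whether the image landed under a different name (for example with a localhost/ or docker.io/ prefix):

    out/minikube-linux-arm64 -p functional-333688 image ls --format table
    out/minikube-linux-arm64 -p functional-333688 ssh -- sudo crictl images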

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 image save kicbase/echo-server:functional-333688 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1123 08:06:02.192902 1066085 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:06:02.193734 1066085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:06:02.193750 1066085 out.go:374] Setting ErrFile to fd 2...
	I1123 08:06:02.193755 1066085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:06:02.194024 1066085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:06:02.194666 1066085 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:06:02.194792 1066085 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:06:02.195398 1066085 cli_runner.go:164] Run: docker container inspect functional-333688 --format={{.State.Status}}
	I1123 08:06:02.217410 1066085 ssh_runner.go:195] Run: systemctl --version
	I1123 08:06:02.217475 1066085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333688
	I1123 08:06:02.235602 1066085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34237 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/functional-333688/id_rsa Username:docker}
	I1123 08:06:02.353831 1066085 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1123 08:06:02.353949 1066085 cache_images.go:255] Failed to load cached images for "functional-333688": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1123 08:06:02.354018 1066085 cache_images.go:267] failed pushing to: functional-333688

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)
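This failure is a knock-on effect of ImageSaveToFile above: the save never produced /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar, so the load fails at the stat, exactly as the stderr shows. Outside the harness, the intended round trip looks like this (illustrative path; the commands mirror the ones the tests invoke):

    out/minikube-linux-arm64 -p functional-333688 image save kicbase/echo-server:functional-333688 /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-333688 image load /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-333688 image ls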

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-333688
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 image save --daemon kicbase/echo-server:functional-333688 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-333688
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-333688: exit status 1 (18.788098ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-333688

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-333688

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-333688 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-333688 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-l2cjr" [b0961a45-5698-4691-8f6b-6e4393f30baa] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1123 08:08:50.642367 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:09:18.346045 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:13:50.642459 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-333688 -n functional-333688
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-23 08:16:36.402377369 +0000 UTC m=+1250.101138114
functional_test.go:1460: (dbg) Run:  kubectl --context functional-333688 describe po hello-node-75c85bcc94-l2cjr -n default
functional_test.go:1460: (dbg) kubectl --context functional-333688 describe po hello-node-75c85bcc94-l2cjr -n default:
Name:             hello-node-75c85bcc94-l2cjr
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-333688/192.168.49.2
Start Time:       Sun, 23 Nov 2025 08:06:35 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xspmc (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-xspmc:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-l2cjr to functional-333688
Normal   Pulling    6m58s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m58s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m58s (x5 over 10m)     kubelet            Error: ErrImagePull
Warning  Failed     4m58s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m47s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-333688 logs hello-node-75c85bcc94-l2cjr -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-333688 logs hello-node-75c85bcc94-l2cjr -n default: exit status 1 (88.735018ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-l2cjr" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-333688 logs hello-node-75c85bcc94-l2cjr -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.72s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-333688 service --namespace=default --https --url hello-node: exit status 115 (480.48558ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30642
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-333688 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-333688 service hello-node --url --format={{.IP}}: exit status 115 (544.054493ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-333688 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-333688 service hello-node --url: exit status 115 (399.542836ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30642
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-333688 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30642
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.40s)
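The HTTPS, Format and URL failures above all exit with SVC_UNREACHABLE for the same underlying reason: the hello-node deployment never got a running pod (see the ImagePullBackOff events earlier), so the NodePort service has no ready endpoints behind it. An illustrative check, not part of the test, is to compare the pods against the EndpointSlice for the service:

    kubectl --context functional-333688 get pods -l app=hello-node
    kubectl --context functional-333688 get endpointslices -l kubernetes.io/service-name=hello-node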

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.58s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-106359 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-106359 --output=json --user=testUser: exit status 80 (1.583690287s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7d76cca2-394f-4a81-b505-16c1182b1a13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-106359 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"d54d8c32-7648-432a-9e9b-c820d61a3d38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-23T08:29:03Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"970ec6be-3e4d-4245-9414-865f80a489df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-106359 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.58s)
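The GUEST_PAUSE error is not specific to JSON output: minikube's pause path shells out to "sudo runc list -f json" on the node, and that command fails because /run/runc does not exist, presumably because the container runtime on this image keeps its state under a different root or uses a different OCI runtime (an assumption, not verified here). A hedged way to reproduce and inspect this directly on the node:

    out/minikube-linux-arm64 -p json-output-106359 ssh -- sudo runc list -f json
    out/minikube-linux-arm64 -p json-output-106359 ssh -- sudo ls /run

If the state lives elsewhere, runc --root <dir> list would show the containers that the bare runc list misses.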

                                                
                                    
x
+
TestJSONOutput/unpause/Command (2.31s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-106359 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-106359 --output=json --user=testUser: exit status 80 (2.310216444s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"13250767-b338-4dc7-b68a-5898d3dc213b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-106359 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"265623f6-f14c-4aed-9b20-113f862e3045","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-23T08:29:05Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"83aab337-e12b-432e-9d12-4b5219ec1a51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-106359 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.31s)

                                                
                                    
x
+
TestPause/serial/Pause (6.51s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-041000 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-041000 --alsologtostderr -v=5: exit status 80 (1.867040059s)

                                                
                                                
-- stdout --
	* Pausing node pause-041000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:50:29.423653 1204417 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:50:29.424424 1204417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:50:29.424440 1204417 out.go:374] Setting ErrFile to fd 2...
	I1123 08:50:29.424447 1204417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:50:29.424740 1204417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:50:29.425005 1204417 out.go:368] Setting JSON to false
	I1123 08:50:29.425031 1204417 mustload.go:66] Loading cluster: pause-041000
	I1123 08:50:29.425490 1204417 config.go:182] Loaded profile config "pause-041000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:50:29.425966 1204417 cli_runner.go:164] Run: docker container inspect pause-041000 --format={{.State.Status}}
	I1123 08:50:29.443655 1204417 host.go:66] Checking if "pause-041000" exists ...
	I1123 08:50:29.444047 1204417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:50:29.515156 1204417 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 08:50:29.499398149 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:50:29.515991 1204417 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-041000 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 08:50:29.519114 1204417 out.go:179] * Pausing node pause-041000 ... 
	I1123 08:50:29.522913 1204417 host.go:66] Checking if "pause-041000" exists ...
	I1123 08:50:29.523362 1204417 ssh_runner.go:195] Run: systemctl --version
	I1123 08:50:29.523412 1204417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-041000
	I1123 08:50:29.541305 1204417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34487 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/pause-041000/id_rsa Username:docker}
	I1123 08:50:29.649825 1204417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:50:29.664280 1204417 pause.go:52] kubelet running: true
	I1123 08:50:29.664350 1204417 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:50:29.931210 1204417 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:50:29.931315 1204417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:50:30.074060 1204417 cri.go:89] found id: "72f2b8001c28bfca63c62972110a9ad36c4820ca3ef16e526e0a844ca887492c"
	I1123 08:50:30.074085 1204417 cri.go:89] found id: "8930f41e87c1a562cfa6ee0b54b62abcdfa7cb763db50ee729cccd3950c35f8d"
	I1123 08:50:30.074092 1204417 cri.go:89] found id: "5706a7e3c437e929a3040132d53857e200f691e33708663ad34cf3739ffa9fa5"
	I1123 08:50:30.074097 1204417 cri.go:89] found id: "3ad7b30765627018f51db658fa83ba644bdf45e1aba371d0fb38f45a91374bcd"
	I1123 08:50:30.074102 1204417 cri.go:89] found id: "0f27e4bf55b6041def1c3334330ebd093381737fe017224f64088735e7725ee3"
	I1123 08:50:30.074106 1204417 cri.go:89] found id: "47c71e2fb9574a058e5a5920e44ae120194a58b368bf86420a497c977179f436"
	I1123 08:50:30.074110 1204417 cri.go:89] found id: "7f4c49daf75ac06cb4aa4e7ca85ebdd1cd16f76be10a2e5f73b954fc7c75b042"
	I1123 08:50:30.074113 1204417 cri.go:89] found id: "45dc8732cc188eafe78085325272d3d984a7f717166cd0187bd52e340fe5512f"
	I1123 08:50:30.074118 1204417 cri.go:89] found id: "9b1e6a2484dc967bcde959062922c706ed1adf1100c5f4604bf3898820f2b5ed"
	I1123 08:50:30.074133 1204417 cri.go:89] found id: "014caef83785d9e215263867e2ad026cec906105ca6cef8f64db232c473788dc"
	I1123 08:50:30.074137 1204417 cri.go:89] found id: "7868a019e831a2a996406b4d89aeecb6d8d06950ed992d7b8da380cb642f8763"
	I1123 08:50:30.074141 1204417 cri.go:89] found id: "0f807846d8d95db888a7aa3b3464682121926c4e8c8de2fb4a034403188441bc"
	I1123 08:50:30.074145 1204417 cri.go:89] found id: "08bc69e75e392dc8dcaa5e734aca0453b6fa3505a2d07872309a1e92aa887ca2"
	I1123 08:50:30.074149 1204417 cri.go:89] found id: "daa63f34218f6ccf5f1c984a5efedf997465def664bd74e9558bce3ec2095793"
	I1123 08:50:30.074153 1204417 cri.go:89] found id: ""
	I1123 08:50:30.074216 1204417 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:50:30.087591 1204417 retry.go:31] will retry after 171.164786ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:50:30Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:50:30.259632 1204417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:50:30.277603 1204417 pause.go:52] kubelet running: false
	I1123 08:50:30.277734 1204417 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:50:30.478483 1204417 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:50:30.478658 1204417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:50:30.627581 1204417 cri.go:89] found id: "72f2b8001c28bfca63c62972110a9ad36c4820ca3ef16e526e0a844ca887492c"
	I1123 08:50:30.627662 1204417 cri.go:89] found id: "8930f41e87c1a562cfa6ee0b54b62abcdfa7cb763db50ee729cccd3950c35f8d"
	I1123 08:50:30.627681 1204417 cri.go:89] found id: "5706a7e3c437e929a3040132d53857e200f691e33708663ad34cf3739ffa9fa5"
	I1123 08:50:30.627699 1204417 cri.go:89] found id: "3ad7b30765627018f51db658fa83ba644bdf45e1aba371d0fb38f45a91374bcd"
	I1123 08:50:30.627732 1204417 cri.go:89] found id: "0f27e4bf55b6041def1c3334330ebd093381737fe017224f64088735e7725ee3"
	I1123 08:50:30.627755 1204417 cri.go:89] found id: "47c71e2fb9574a058e5a5920e44ae120194a58b368bf86420a497c977179f436"
	I1123 08:50:30.627773 1204417 cri.go:89] found id: "7f4c49daf75ac06cb4aa4e7ca85ebdd1cd16f76be10a2e5f73b954fc7c75b042"
	I1123 08:50:30.627790 1204417 cri.go:89] found id: "45dc8732cc188eafe78085325272d3d984a7f717166cd0187bd52e340fe5512f"
	I1123 08:50:30.627821 1204417 cri.go:89] found id: "9b1e6a2484dc967bcde959062922c706ed1adf1100c5f4604bf3898820f2b5ed"
	I1123 08:50:30.627844 1204417 cri.go:89] found id: "014caef83785d9e215263867e2ad026cec906105ca6cef8f64db232c473788dc"
	I1123 08:50:30.627863 1204417 cri.go:89] found id: "7868a019e831a2a996406b4d89aeecb6d8d06950ed992d7b8da380cb642f8763"
	I1123 08:50:30.627882 1204417 cri.go:89] found id: "0f807846d8d95db888a7aa3b3464682121926c4e8c8de2fb4a034403188441bc"
	I1123 08:50:30.627908 1204417 cri.go:89] found id: "08bc69e75e392dc8dcaa5e734aca0453b6fa3505a2d07872309a1e92aa887ca2"
	I1123 08:50:30.627928 1204417 cri.go:89] found id: "daa63f34218f6ccf5f1c984a5efedf997465def664bd74e9558bce3ec2095793"
	I1123 08:50:30.627945 1204417 cri.go:89] found id: ""
	I1123 08:50:30.628029 1204417 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:50:30.645084 1204417 retry.go:31] will retry after 233.069116ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:50:30Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:50:30.878383 1204417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:50:30.893323 1204417 pause.go:52] kubelet running: false
	I1123 08:50:30.893474 1204417 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:50:31.101657 1204417 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:50:31.101796 1204417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:50:31.199565 1204417 cri.go:89] found id: "72f2b8001c28bfca63c62972110a9ad36c4820ca3ef16e526e0a844ca887492c"
	I1123 08:50:31.199657 1204417 cri.go:89] found id: "8930f41e87c1a562cfa6ee0b54b62abcdfa7cb763db50ee729cccd3950c35f8d"
	I1123 08:50:31.199676 1204417 cri.go:89] found id: "5706a7e3c437e929a3040132d53857e200f691e33708663ad34cf3739ffa9fa5"
	I1123 08:50:31.199700 1204417 cri.go:89] found id: "3ad7b30765627018f51db658fa83ba644bdf45e1aba371d0fb38f45a91374bcd"
	I1123 08:50:31.199741 1204417 cri.go:89] found id: "0f27e4bf55b6041def1c3334330ebd093381737fe017224f64088735e7725ee3"
	I1123 08:50:31.199774 1204417 cri.go:89] found id: "47c71e2fb9574a058e5a5920e44ae120194a58b368bf86420a497c977179f436"
	I1123 08:50:31.199794 1204417 cri.go:89] found id: "7f4c49daf75ac06cb4aa4e7ca85ebdd1cd16f76be10a2e5f73b954fc7c75b042"
	I1123 08:50:31.199837 1204417 cri.go:89] found id: "45dc8732cc188eafe78085325272d3d984a7f717166cd0187bd52e340fe5512f"
	I1123 08:50:31.199862 1204417 cri.go:89] found id: "9b1e6a2484dc967bcde959062922c706ed1adf1100c5f4604bf3898820f2b5ed"
	I1123 08:50:31.199894 1204417 cri.go:89] found id: "014caef83785d9e215263867e2ad026cec906105ca6cef8f64db232c473788dc"
	I1123 08:50:31.199931 1204417 cri.go:89] found id: "7868a019e831a2a996406b4d89aeecb6d8d06950ed992d7b8da380cb642f8763"
	I1123 08:50:31.199948 1204417 cri.go:89] found id: "0f807846d8d95db888a7aa3b3464682121926c4e8c8de2fb4a034403188441bc"
	I1123 08:50:31.199975 1204417 cri.go:89] found id: "08bc69e75e392dc8dcaa5e734aca0453b6fa3505a2d07872309a1e92aa887ca2"
	I1123 08:50:31.199997 1204417 cri.go:89] found id: "daa63f34218f6ccf5f1c984a5efedf997465def664bd74e9558bce3ec2095793"
	I1123 08:50:31.200014 1204417 cri.go:89] found id: ""
	I1123 08:50:31.200107 1204417 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:50:31.217344 1204417 out.go:203] 
	W1123 08:50:31.220485 1204417 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:50:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:50:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:50:31.220719 1204417 out.go:285] * 
	* 
	W1123 08:50:31.230572 1204417 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:50:31.235740 1204417 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-041000 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-041000
helpers_test.go:243: (dbg) docker inspect pause-041000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "716003843f34a16a4398b0f92f1ec30229f4baf9aa34e7fc071fd5e14eee9ea6",
	        "Created": "2025-11-23T08:48:47.937244349Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1198486,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:48:48.017743348Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/716003843f34a16a4398b0f92f1ec30229f4baf9aa34e7fc071fd5e14eee9ea6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/716003843f34a16a4398b0f92f1ec30229f4baf9aa34e7fc071fd5e14eee9ea6/hostname",
	        "HostsPath": "/var/lib/docker/containers/716003843f34a16a4398b0f92f1ec30229f4baf9aa34e7fc071fd5e14eee9ea6/hosts",
	        "LogPath": "/var/lib/docker/containers/716003843f34a16a4398b0f92f1ec30229f4baf9aa34e7fc071fd5e14eee9ea6/716003843f34a16a4398b0f92f1ec30229f4baf9aa34e7fc071fd5e14eee9ea6-json.log",
	        "Name": "/pause-041000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-041000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-041000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "716003843f34a16a4398b0f92f1ec30229f4baf9aa34e7fc071fd5e14eee9ea6",
	                "LowerDir": "/var/lib/docker/overlay2/d4286facfeda8026edb408d3a28ac48e26c98d7b6c4942287882529324f3c0af-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d4286facfeda8026edb408d3a28ac48e26c98d7b6c4942287882529324f3c0af/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d4286facfeda8026edb408d3a28ac48e26c98d7b6c4942287882529324f3c0af/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d4286facfeda8026edb408d3a28ac48e26c98d7b6c4942287882529324f3c0af/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-041000",
	                "Source": "/var/lib/docker/volumes/pause-041000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-041000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-041000",
	                "name.minikube.sigs.k8s.io": "pause-041000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "397feafbbc91e0992a5b3a35dde420980d91399bd3e3d16903c0ebdcfe6a6800",
	            "SandboxKey": "/var/run/docker/netns/397feafbbc91",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34487"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34488"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34491"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34489"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34490"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-041000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:b9:c9:37:f1:3e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "67209089d6a2b97833ff4c62ee75196f13a8692732ca0bb2c519047a19d0d291",
	                    "EndpointID": "48c46a6f478adc038bf2f1178863b66c73263e18f0a8535868623c8f78e66069",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-041000",
	                        "716003843f34"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
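The inspect output above shows the kic node container with "Running": true and "Paused": false, which lines up with the pause failure: "minikube pause" is expected to freeze the Kubernetes workloads inside the node via the container runtime (CRI-O here) rather than Docker-pause the node container itself. A minimal Go sketch, not part of the test suite, that queries just those two state fields instead of dumping the full JSON (the container name pause-041000 is taken from the log above):

    // pausecheck.go - mirrors the post-mortem's "docker inspect" step, but asks
    // the Docker CLI only for State.Running and State.Paused via a Go template.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Container name taken from the log above; adjust for other profiles.
    	name := "pause-041000"

    	out, err := exec.Command("docker", "inspect",
    		"-f", "{{.State.Running}} {{.State.Paused}}", name).Output()
    	if err != nil {
    		fmt.Println("docker inspect failed:", err)
    		return
    	}

    	fields := strings.Fields(string(out))
    	if len(fields) != 2 {
    		fmt.Println("unexpected inspect output:", string(out))
    		return
    	}
    	// For the failure above this prints "running=true paused=false":
    	// the node container keeps running; the pause is applied (or, here,
    	// not applied) to the containers inside it by the runtime.
    	fmt.Printf("running=%s paused=%s\n", fields[0], fields[1])
    }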
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-041000 -n pause-041000
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-041000 -n pause-041000: exit status 2 (487.388971ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
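The status probe above extracts the Host field with a Go template and records the exit code separately, since a non-zero exit is treated as possibly benign ("may be ok") when the printed state is still Running. A minimal Go sketch of that probe, not part of helpers_test.go; the binary path and profile name are taken from the log above:

    // statuscheck.go - run the same status command as the post-mortem and
    // report both the printed host state and the process exit code.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Binary path and profile name taken from the log above; adjust as needed.
    	cmd := exec.Command("out/minikube-linux-arm64", "status",
    		"--format={{.Host}}", "-p", "pause-041000", "-n", "pause-041000")

    	out, err := cmd.Output()
    	host := strings.TrimSpace(string(out))

    	exitCode := 0
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		// Non-zero exit: stdout is still captured in out.
    		exitCode = exitErr.ExitCode()
    	} else if err != nil {
    		fmt.Println("could not run minikube:", err)
    		return
    	}

    	// For the failure above this would print: host=Running exit=2
    	fmt.Printf("host=%s exit=%d\n", host, exitCode)
    }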
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-041000 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-041000 logs -n 25: (1.353873841s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-293465 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-293465       │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p missing-upgrade-232904 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-232904    │ jenkins │ v1.32.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p NoKubernetes-293465 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-293465       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p NoKubernetes-293465                                                                                                                   │ NoKubernetes-293465       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p NoKubernetes-293465 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-293465       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ ssh     │ -p NoKubernetes-293465 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-293465       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ stop    │ -p NoKubernetes-293465                                                                                                                   │ NoKubernetes-293465       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p missing-upgrade-232904 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-232904    │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:46 UTC │
	│ start   │ -p NoKubernetes-293465 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-293465       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:46 UTC │
	│ ssh     │ -p NoKubernetes-293465 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-293465       │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │                     │
	│ delete  │ -p NoKubernetes-293465                                                                                                                   │ NoKubernetes-293465       │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ start   │ -p kubernetes-upgrade-354226 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-354226 │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ delete  │ -p missing-upgrade-232904                                                                                                                │ missing-upgrade-232904    │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ stop    │ -p kubernetes-upgrade-354226                                                                                                             │ kubernetes-upgrade-354226 │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ start   │ -p kubernetes-upgrade-354226 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-354226 │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │                     │
	│ start   │ -p stopped-upgrade-885580 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-885580    │ jenkins │ v1.32.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:47 UTC │
	│ stop    │ stopped-upgrade-885580 stop                                                                                                              │ stopped-upgrade-885580    │ jenkins │ v1.32.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ start   │ -p stopped-upgrade-885580 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-885580    │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ delete  │ -p stopped-upgrade-885580                                                                                                                │ stopped-upgrade-885580    │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ start   │ -p running-upgrade-462653 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-462653    │ jenkins │ v1.32.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:48 UTC │
	│ start   │ -p running-upgrade-462653 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-462653    │ jenkins │ v1.37.0 │ 23 Nov 25 08:48 UTC │ 23 Nov 25 08:48 UTC │
	│ delete  │ -p running-upgrade-462653                                                                                                                │ running-upgrade-462653    │ jenkins │ v1.37.0 │ 23 Nov 25 08:48 UTC │ 23 Nov 25 08:48 UTC │
	│ start   │ -p pause-041000 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-041000              │ jenkins │ v1.37.0 │ 23 Nov 25 08:48 UTC │ 23 Nov 25 08:50 UTC │
	│ start   │ -p pause-041000 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-041000              │ jenkins │ v1.37.0 │ 23 Nov 25 08:50 UTC │ 23 Nov 25 08:50 UTC │
	│ pause   │ -p pause-041000 --alsologtostderr -v=5                                                                                                   │ pause-041000              │ jenkins │ v1.37.0 │ 23 Nov 25 08:50 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:50:02
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:50:02.600742 1202718 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:50:02.600857 1202718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:50:02.600869 1202718 out.go:374] Setting ErrFile to fd 2...
	I1123 08:50:02.600875 1202718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:50:02.601192 1202718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:50:02.601616 1202718 out.go:368] Setting JSON to false
	I1123 08:50:02.602778 1202718 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34348,"bootTime":1763853455,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 08:50:02.602856 1202718 start.go:143] virtualization:  
	I1123 08:50:02.607376 1202718 out.go:179] * [pause-041000] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:50:02.610390 1202718 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:50:02.610597 1202718 notify.go:221] Checking for updates...
	I1123 08:50:02.614910 1202718 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:50:02.618297 1202718 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:50:02.621181 1202718 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 08:50:02.624070 1202718 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:50:02.627015 1202718 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:50:02.630777 1202718 config.go:182] Loaded profile config "pause-041000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:50:02.631562 1202718 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:50:02.669196 1202718 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:50:02.669313 1202718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:50:02.741225 1202718 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 08:50:02.732177326 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:50:02.741335 1202718 docker.go:319] overlay module found
	I1123 08:50:02.744614 1202718 out.go:179] * Using the docker driver based on existing profile
	I1123 08:50:02.747553 1202718 start.go:309] selected driver: docker
	I1123 08:50:02.747583 1202718 start.go:927] validating driver "docker" against &{Name:pause-041000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-041000 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:50:02.747728 1202718 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:50:02.747829 1202718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:50:02.807770 1202718 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 08:50:02.798402351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:50:02.808156 1202718 cni.go:84] Creating CNI manager for ""
	I1123 08:50:02.808227 1202718 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:50:02.808270 1202718 start.go:353] cluster config:
	{Name:pause-041000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-041000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:50:02.811593 1202718 out.go:179] * Starting "pause-041000" primary control-plane node in "pause-041000" cluster
	I1123 08:50:02.814450 1202718 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:50:02.817403 1202718 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:50:02.820243 1202718 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:50:02.820292 1202718 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 08:50:02.820306 1202718 cache.go:65] Caching tarball of preloaded images
	I1123 08:50:02.820317 1202718 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:50:02.820388 1202718 preload.go:238] Found /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 08:50:02.820398 1202718 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:50:02.820525 1202718 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/config.json ...
	I1123 08:50:02.839117 1202718 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:50:02.839139 1202718 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:50:02.839159 1202718 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:50:02.839254 1202718 start.go:360] acquireMachinesLock for pause-041000: {Name:mk607c5ec25c4c2ac4976ceaf5f6a6abdbe1e557 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:50:02.839330 1202718 start.go:364] duration metric: took 49.041µs to acquireMachinesLock for "pause-041000"
	I1123 08:50:02.839353 1202718 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:50:02.839361 1202718 fix.go:54] fixHost starting: 
	I1123 08:50:02.839623 1202718 cli_runner.go:164] Run: docker container inspect pause-041000 --format={{.State.Status}}
	I1123 08:50:02.856406 1202718 fix.go:112] recreateIfNeeded on pause-041000: state=Running err=<nil>
	W1123 08:50:02.856446 1202718 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 08:50:01.910312 1187534 logs.go:123] Gathering logs for kube-scheduler [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37] ...
	I1123 08:50:01.910350 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:01.976368 1187534 logs.go:123] Gathering logs for kube-controller-manager [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417] ...
	I1123 08:50:01.976409 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:04.510296 1187534 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:50:04.510848 1187534 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:50:04.510909 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:50:04.510981 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:50:04.547829 1187534 cri.go:89] found id: "39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:04.547858 1187534 cri.go:89] found id: ""
	I1123 08:50:04.547867 1187534 logs.go:282] 1 containers: [39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387]
	I1123 08:50:04.547922 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:04.552079 1187534 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1123 08:50:04.552153 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:50:04.580253 1187534 cri.go:89] found id: ""
	I1123 08:50:04.580278 1187534 logs.go:282] 0 containers: []
	W1123 08:50:04.580287 1187534 logs.go:284] No container was found matching "etcd"
	I1123 08:50:04.580293 1187534 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1123 08:50:04.580351 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:50:04.605349 1187534 cri.go:89] found id: ""
	I1123 08:50:04.605375 1187534 logs.go:282] 0 containers: []
	W1123 08:50:04.605384 1187534 logs.go:284] No container was found matching "coredns"
	I1123 08:50:04.605395 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:50:04.605461 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:50:04.631584 1187534 cri.go:89] found id: "ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:04.631608 1187534 cri.go:89] found id: ""
	I1123 08:50:04.631617 1187534 logs.go:282] 1 containers: [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37]
	I1123 08:50:04.631676 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:04.635259 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:50:04.635333 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:50:04.662370 1187534 cri.go:89] found id: ""
	I1123 08:50:04.662438 1187534 logs.go:282] 0 containers: []
	W1123 08:50:04.662450 1187534 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:50:04.662457 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:50:04.662546 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:50:04.689211 1187534 cri.go:89] found id: "82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:04.689231 1187534 cri.go:89] found id: ""
	I1123 08:50:04.689238 1187534 logs.go:282] 1 containers: [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417]
	I1123 08:50:04.689295 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:04.695376 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1123 08:50:04.695444 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:50:04.725844 1187534 cri.go:89] found id: ""
	I1123 08:50:04.725910 1187534 logs.go:282] 0 containers: []
	W1123 08:50:04.725934 1187534 logs.go:284] No container was found matching "kindnet"
	I1123 08:50:04.725955 1187534 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:50:04.726026 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:50:04.752803 1187534 cri.go:89] found id: ""
	I1123 08:50:04.752829 1187534 logs.go:282] 0 containers: []
	W1123 08:50:04.752837 1187534 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:50:04.752846 1187534 logs.go:123] Gathering logs for CRI-O ...
	I1123 08:50:04.752857 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1123 08:50:04.807802 1187534 logs.go:123] Gathering logs for container status ...
	I1123 08:50:04.807838 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:50:04.837479 1187534 logs.go:123] Gathering logs for kubelet ...
	I1123 08:50:04.837508 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:50:04.953068 1187534 logs.go:123] Gathering logs for dmesg ...
	I1123 08:50:04.953105 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:50:04.973388 1187534 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:50:04.973418 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:50:05.048855 1187534 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:50:05.048876 1187534 logs.go:123] Gathering logs for kube-apiserver [39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387] ...
	I1123 08:50:05.048890 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:05.081235 1187534 logs.go:123] Gathering logs for kube-scheduler [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37] ...
	I1123 08:50:05.081269 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:05.140155 1187534 logs.go:123] Gathering logs for kube-controller-manager [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417] ...
	I1123 08:50:05.140192 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:02.859706 1202718 out.go:252] * Updating the running docker "pause-041000" container ...
	I1123 08:50:02.859760 1202718 machine.go:94] provisionDockerMachine start ...
	I1123 08:50:02.859900 1202718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-041000
	I1123 08:50:02.878321 1202718 main.go:143] libmachine: Using SSH client type: native
	I1123 08:50:02.878655 1202718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34487 <nil> <nil>}
	I1123 08:50:02.878671 1202718 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:50:03.030841 1202718 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-041000
	
	I1123 08:50:03.030935 1202718 ubuntu.go:182] provisioning hostname "pause-041000"
	I1123 08:50:03.031030 1202718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-041000
	I1123 08:50:03.049259 1202718 main.go:143] libmachine: Using SSH client type: native
	I1123 08:50:03.049590 1202718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34487 <nil> <nil>}
	I1123 08:50:03.049605 1202718 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-041000 && echo "pause-041000" | sudo tee /etc/hostname
	I1123 08:50:03.212903 1202718 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-041000
	
	I1123 08:50:03.213004 1202718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-041000
	I1123 08:50:03.233384 1202718 main.go:143] libmachine: Using SSH client type: native
	I1123 08:50:03.233720 1202718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34487 <nil> <nil>}
	I1123 08:50:03.233743 1202718 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-041000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-041000/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-041000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:50:03.383588 1202718 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:50:03.383613 1202718 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 08:50:03.383641 1202718 ubuntu.go:190] setting up certificates
	I1123 08:50:03.383650 1202718 provision.go:84] configureAuth start
	I1123 08:50:03.383708 1202718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-041000
	I1123 08:50:03.400794 1202718 provision.go:143] copyHostCerts
	I1123 08:50:03.400864 1202718 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem, removing ...
	I1123 08:50:03.400878 1202718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem
	I1123 08:50:03.400956 1202718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 08:50:03.401057 1202718 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem, removing ...
	I1123 08:50:03.401063 1202718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem
	I1123 08:50:03.401089 1202718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 08:50:03.401180 1202718 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem, removing ...
	I1123 08:50:03.401185 1202718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem
	I1123 08:50:03.401209 1202718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
	I1123 08:50:03.401254 1202718 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.pause-041000 san=[127.0.0.1 192.168.85.2 localhost minikube pause-041000]
	I1123 08:50:03.463281 1202718 provision.go:177] copyRemoteCerts
	I1123 08:50:03.463389 1202718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:50:03.463438 1202718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-041000
	I1123 08:50:03.483693 1202718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34487 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/pause-041000/id_rsa Username:docker}
	I1123 08:50:03.586812 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 08:50:03.604225 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:50:03.629674 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1123 08:50:03.652495 1202718 provision.go:87] duration metric: took 268.822644ms to configureAuth
	I1123 08:50:03.652528 1202718 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:50:03.652755 1202718 config.go:182] Loaded profile config "pause-041000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:50:03.652868 1202718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-041000
	I1123 08:50:03.670309 1202718 main.go:143] libmachine: Using SSH client type: native
	I1123 08:50:03.670632 1202718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34487 <nil> <nil>}
	I1123 08:50:03.670651 1202718 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:50:09.054698 1202718 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:50:09.054722 1202718 machine.go:97] duration metric: took 6.194953541s to provisionDockerMachine
	I1123 08:50:09.054733 1202718 start.go:293] postStartSetup for "pause-041000" (driver="docker")
	I1123 08:50:09.054744 1202718 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:50:09.054821 1202718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:50:09.054867 1202718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-041000
	I1123 08:50:09.072544 1202718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34487 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/pause-041000/id_rsa Username:docker}
	I1123 08:50:09.174967 1202718 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:50:09.178258 1202718 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:50:09.178284 1202718 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:50:09.178295 1202718 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/addons for local assets ...
	I1123 08:50:09.178345 1202718 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/files for local assets ...
	I1123 08:50:09.178423 1202718 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem -> 10431592.pem in /etc/ssl/certs
	I1123 08:50:09.178537 1202718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:50:09.185618 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:50:09.203435 1202718 start.go:296] duration metric: took 148.687196ms for postStartSetup
	I1123 08:50:09.203558 1202718 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:50:09.203624 1202718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-041000
	I1123 08:50:09.220265 1202718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34487 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/pause-041000/id_rsa Username:docker}
	I1123 08:50:09.324308 1202718 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:50:09.329001 1202718 fix.go:56] duration metric: took 6.489633581s for fixHost
	I1123 08:50:09.329026 1202718 start.go:83] releasing machines lock for "pause-041000", held for 6.48968322s
	I1123 08:50:09.329100 1202718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-041000
	I1123 08:50:09.346016 1202718 ssh_runner.go:195] Run: cat /version.json
	I1123 08:50:09.346079 1202718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-041000
	I1123 08:50:09.346365 1202718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:50:09.346419 1202718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-041000
	I1123 08:50:09.363478 1202718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34487 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/pause-041000/id_rsa Username:docker}
	I1123 08:50:09.377392 1202718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34487 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/pause-041000/id_rsa Username:docker}
	I1123 08:50:09.552888 1202718 ssh_runner.go:195] Run: systemctl --version
	I1123 08:50:09.559415 1202718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:50:09.606987 1202718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:50:09.612301 1202718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:50:09.612395 1202718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:50:09.620593 1202718 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:50:09.620626 1202718 start.go:496] detecting cgroup driver to use...
	I1123 08:50:09.620668 1202718 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:50:09.620736 1202718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:50:09.637073 1202718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:50:09.650918 1202718 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:50:09.650988 1202718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:50:09.668351 1202718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:50:09.682624 1202718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:50:09.824310 1202718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:50:09.966950 1202718 docker.go:234] disabling docker service ...
	I1123 08:50:09.967075 1202718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:50:09.982014 1202718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:50:09.994750 1202718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:50:10.138241 1202718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:50:10.281483 1202718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:50:10.294662 1202718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:50:10.308807 1202718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:50:10.308888 1202718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:50:10.318107 1202718 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 08:50:10.318173 1202718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:50:10.327451 1202718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:50:10.336416 1202718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:50:10.345267 1202718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:50:10.353320 1202718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:50:10.362146 1202718 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:50:10.370790 1202718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:50:10.380268 1202718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:50:10.388219 1202718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:50:10.395758 1202718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:50:10.534792 1202718 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 08:50:10.759290 1202718 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:50:10.759364 1202718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:50:10.763575 1202718 start.go:564] Will wait 60s for crictl version
	I1123 08:50:10.763684 1202718 ssh_runner.go:195] Run: which crictl
	I1123 08:50:10.767361 1202718 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:50:10.797273 1202718 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:50:10.797442 1202718 ssh_runner.go:195] Run: crio --version
	I1123 08:50:10.844739 1202718 ssh_runner.go:195] Run: crio --version
	I1123 08:50:10.884358 1202718 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 08:50:07.667598 1187534 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:50:07.668080 1187534 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:50:07.668127 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:50:07.668181 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:50:07.698967 1187534 cri.go:89] found id: "39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:07.698990 1187534 cri.go:89] found id: ""
	I1123 08:50:07.698999 1187534 logs.go:282] 1 containers: [39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387]
	I1123 08:50:07.699055 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:07.702736 1187534 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1123 08:50:07.702827 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:50:07.730675 1187534 cri.go:89] found id: ""
	I1123 08:50:07.730698 1187534 logs.go:282] 0 containers: []
	W1123 08:50:07.730706 1187534 logs.go:284] No container was found matching "etcd"
	I1123 08:50:07.730714 1187534 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1123 08:50:07.730798 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:50:07.760933 1187534 cri.go:89] found id: ""
	I1123 08:50:07.760959 1187534 logs.go:282] 0 containers: []
	W1123 08:50:07.760967 1187534 logs.go:284] No container was found matching "coredns"
	I1123 08:50:07.760976 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:50:07.761038 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:50:07.787584 1187534 cri.go:89] found id: "ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:07.787608 1187534 cri.go:89] found id: ""
	I1123 08:50:07.787616 1187534 logs.go:282] 1 containers: [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37]
	I1123 08:50:07.787674 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:07.791296 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:50:07.791367 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:50:07.816321 1187534 cri.go:89] found id: ""
	I1123 08:50:07.816344 1187534 logs.go:282] 0 containers: []
	W1123 08:50:07.816352 1187534 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:50:07.816358 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:50:07.816417 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:50:07.846824 1187534 cri.go:89] found id: "82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:07.846848 1187534 cri.go:89] found id: ""
	I1123 08:50:07.846856 1187534 logs.go:282] 1 containers: [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417]
	I1123 08:50:07.846912 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:07.850535 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1123 08:50:07.850613 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:50:07.876504 1187534 cri.go:89] found id: ""
	I1123 08:50:07.876528 1187534 logs.go:282] 0 containers: []
	W1123 08:50:07.876537 1187534 logs.go:284] No container was found matching "kindnet"
	I1123 08:50:07.876543 1187534 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:50:07.876619 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:50:07.903205 1187534 cri.go:89] found id: ""
	I1123 08:50:07.903238 1187534 logs.go:282] 0 containers: []
	W1123 08:50:07.903247 1187534 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:50:07.903275 1187534 logs.go:123] Gathering logs for kube-controller-manager [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417] ...
	I1123 08:50:07.903299 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:07.927840 1187534 logs.go:123] Gathering logs for CRI-O ...
	I1123 08:50:07.927867 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1123 08:50:07.984041 1187534 logs.go:123] Gathering logs for container status ...
	I1123 08:50:07.984077 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:50:08.014822 1187534 logs.go:123] Gathering logs for kubelet ...
	I1123 08:50:08.014851 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:50:08.129119 1187534 logs.go:123] Gathering logs for dmesg ...
	I1123 08:50:08.129154 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:50:08.147725 1187534 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:50:08.147759 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:50:08.214394 1187534 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:50:08.214415 1187534 logs.go:123] Gathering logs for kube-apiserver [39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387] ...
	I1123 08:50:08.214436 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:08.250995 1187534 logs.go:123] Gathering logs for kube-scheduler [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37] ...
	I1123 08:50:08.251025 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
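(The log-gathering loop above shells out to journalctl and crictl; roughly the same diagnostics can be pulled by hand on the node. A sketch, using the container ID reported for kube-apiserver above; substitute whatever `crictl ps -a` returns on your node:)

	# control-plane container inventory plus runtime and kubelet journals
	sudo crictl ps -a
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	# tail the apiserver container found by the listing above
	sudo crictl logs --tail 400 39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387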
	I1123 08:50:10.809133 1187534 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:50:10.809510 1187534 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:50:10.809559 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:50:10.809617 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:50:10.846334 1187534 cri.go:89] found id: "39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:10.846357 1187534 cri.go:89] found id: ""
	I1123 08:50:10.846365 1187534 logs.go:282] 1 containers: [39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387]
	I1123 08:50:10.846418 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:10.850664 1187534 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1123 08:50:10.850740 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:50:10.884139 1187534 cri.go:89] found id: ""
	I1123 08:50:10.884160 1187534 logs.go:282] 0 containers: []
	W1123 08:50:10.884168 1187534 logs.go:284] No container was found matching "etcd"
	I1123 08:50:10.884177 1187534 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1123 08:50:10.884236 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:50:10.930157 1187534 cri.go:89] found id: ""
	I1123 08:50:10.930181 1187534 logs.go:282] 0 containers: []
	W1123 08:50:10.930190 1187534 logs.go:284] No container was found matching "coredns"
	I1123 08:50:10.930197 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:50:10.930257 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:50:10.971456 1187534 cri.go:89] found id: "ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:10.971476 1187534 cri.go:89] found id: ""
	I1123 08:50:10.971483 1187534 logs.go:282] 1 containers: [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37]
	I1123 08:50:10.971541 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:10.975608 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:50:10.975681 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:50:11.023586 1187534 cri.go:89] found id: ""
	I1123 08:50:11.023607 1187534 logs.go:282] 0 containers: []
	W1123 08:50:11.023616 1187534 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:50:11.023623 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:50:11.023684 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:50:11.075063 1187534 cri.go:89] found id: "82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:11.075083 1187534 cri.go:89] found id: ""
	I1123 08:50:11.075091 1187534 logs.go:282] 1 containers: [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417]
	I1123 08:50:11.075146 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:11.079349 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1123 08:50:11.079419 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:50:11.123128 1187534 cri.go:89] found id: ""
	I1123 08:50:11.123149 1187534 logs.go:282] 0 containers: []
	W1123 08:50:11.123158 1187534 logs.go:284] No container was found matching "kindnet"
	I1123 08:50:11.123165 1187534 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:50:11.123266 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:50:11.164466 1187534 cri.go:89] found id: ""
	I1123 08:50:11.164488 1187534 logs.go:282] 0 containers: []
	W1123 08:50:11.164497 1187534 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:50:11.164505 1187534 logs.go:123] Gathering logs for CRI-O ...
	I1123 08:50:11.164528 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1123 08:50:11.240514 1187534 logs.go:123] Gathering logs for container status ...
	I1123 08:50:11.240595 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:50:11.285980 1187534 logs.go:123] Gathering logs for kubelet ...
	I1123 08:50:11.286055 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:50:11.412788 1187534 logs.go:123] Gathering logs for dmesg ...
	I1123 08:50:11.412843 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:50:11.430659 1187534 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:50:11.430792 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:50:11.519194 1187534 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:50:11.519272 1187534 logs.go:123] Gathering logs for kube-apiserver [39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387] ...
	I1123 08:50:11.519300 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:11.561272 1187534 logs.go:123] Gathering logs for kube-scheduler [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37] ...
	I1123 08:50:11.561572 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:11.631461 1187534 logs.go:123] Gathering logs for kube-controller-manager [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417] ...
	I1123 08:50:11.631513 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:10.887564 1202718 cli_runner.go:164] Run: docker network inspect pause-041000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:50:10.905887 1202718 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:50:10.910554 1202718 kubeadm.go:884] updating cluster {Name:pause-041000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-041000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:50:10.910702 1202718 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:50:10.910753 1202718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:50:10.952072 1202718 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:50:10.952091 1202718 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:50:10.952147 1202718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:50:10.984995 1202718 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:50:10.985014 1202718 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:50:10.985021 1202718 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1123 08:50:10.985130 1202718 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-041000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-041000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
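(The ExecStart override above is written to the kubelet systemd drop-in a few lines below, as the 362-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. On the node, the effective unit and flags can be checked with a sketch like:)

	# show kubelet.service plus every drop-in, including 10-kubeadm.conf
	systemctl cat kubelet
	# confirm the override flags made it into the running process
	pgrep -af kubelet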
	I1123 08:50:10.985204 1202718 ssh_runner.go:195] Run: crio config
	I1123 08:50:11.060189 1202718 cni.go:84] Creating CNI manager for ""
	I1123 08:50:11.060260 1202718 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:50:11.060298 1202718 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:50:11.060353 1202718 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-041000 NodeName:pause-041000 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:50:11.060526 1202718 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-041000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:50:11.060634 1202718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:50:11.070674 1202718 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:50:11.070796 1202718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:50:11.083158 1202718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1123 08:50:11.101587 1202718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:50:11.119076 1202718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
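(The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new by the 2209-byte scp; whether the control plane needs reconfiguring is decided later by diffing it against the copy already on the node. Roughly the check minikube performs, runnable by hand:)

	# no diff means the running cluster does not need reconfiguration
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new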
	I1123 08:50:11.137322 1202718 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:50:11.141917 1202718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:50:11.338814 1202718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:50:11.354216 1202718 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000 for IP: 192.168.85.2
	I1123 08:50:11.354237 1202718 certs.go:195] generating shared ca certs ...
	I1123 08:50:11.354254 1202718 certs.go:227] acquiring lock for ca certs: {Name:mk8b2dd1177c57b74f955f055073d275001ee616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:50:11.354380 1202718 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key
	I1123 08:50:11.354438 1202718 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key
	I1123 08:50:11.354455 1202718 certs.go:257] generating profile certs ...
	I1123 08:50:11.354544 1202718 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/client.key
	I1123 08:50:11.354612 1202718 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/apiserver.key.6d8251ec
	I1123 08:50:11.354654 1202718 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/proxy-client.key
	I1123 08:50:11.354767 1202718 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem (1338 bytes)
	W1123 08:50:11.354801 1202718 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159_empty.pem, impossibly tiny 0 bytes
	I1123 08:50:11.354814 1202718 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:50:11.354842 1202718 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:50:11.354875 1202718 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:50:11.354902 1202718 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem (1675 bytes)
	I1123 08:50:11.354949 1202718 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:50:11.355573 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:50:11.378587 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:50:11.400527 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:50:11.423761 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:50:11.446154 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 08:50:11.469247 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:50:11.489147 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:50:11.508933 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1123 08:50:11.547418 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /usr/share/ca-certificates/10431592.pem (1708 bytes)
	I1123 08:50:11.575101 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:50:11.609402 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem --> /usr/share/ca-certificates/1043159.pem (1338 bytes)
	I1123 08:50:11.630628 1202718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:50:11.647973 1202718 ssh_runner.go:195] Run: openssl version
	I1123 08:50:11.654688 1202718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10431592.pem && ln -fs /usr/share/ca-certificates/10431592.pem /etc/ssl/certs/10431592.pem"
	I1123 08:50:11.663398 1202718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10431592.pem
	I1123 08:50:11.669734 1202718 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:03 /usr/share/ca-certificates/10431592.pem
	I1123 08:50:11.669798 1202718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10431592.pem
	I1123 08:50:11.712474 1202718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10431592.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:50:11.720366 1202718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:50:11.728600 1202718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:50:11.732183 1202718 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:50:11.732280 1202718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:50:11.773019 1202718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:50:11.780768 1202718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1043159.pem && ln -fs /usr/share/ca-certificates/1043159.pem /etc/ssl/certs/1043159.pem"
	I1123 08:50:11.788755 1202718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1043159.pem
	I1123 08:50:11.792354 1202718 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:03 /usr/share/ca-certificates/1043159.pem
	I1123 08:50:11.792418 1202718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1043159.pem
	I1123 08:50:11.833257 1202718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1043159.pem /etc/ssl/certs/51391683.0"
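(The openssl/ln pairs above follow the usual c_rehash convention: each CA certificate under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink so OpenSSL can find it by hash. A sketch of the same step for one certificate, collapsing the two links the log creates into one; the hash value is whatever openssl prints on the node:)

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"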
	I1123 08:50:11.841204 1202718 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:50:11.844824 1202718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 08:50:11.885657 1202718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 08:50:11.926621 1202718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 08:50:11.967474 1202718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 08:50:12.012380 1202718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 08:50:12.055124 1202718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
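(Each `openssl x509 -checkend 86400` call above exits non-zero if the certificate expires within the next 24 hours, which is how minikube decides the existing control-plane certs can be reused. The same sweep over the cert paths seen above, as a sketch:)

	for crt in apiserver-kubelet-client.crt apiserver-etcd-client.crt front-proxy-client.crt \
	           etcd/server.crt etcd/healthcheck-client.crt etcd/peer.crt; do
	  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$crt" \
	    || echo "expires within 24h: $crt"
	done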
	I1123 08:50:12.096711 1202718 kubeadm.go:401] StartCluster: {Name:pause-041000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-041000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:50:12.096832 1202718 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:50:12.096898 1202718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:50:12.124886 1202718 cri.go:89] found id: "45dc8732cc188eafe78085325272d3d984a7f717166cd0187bd52e340fe5512f"
	I1123 08:50:12.124907 1202718 cri.go:89] found id: "9b1e6a2484dc967bcde959062922c706ed1adf1100c5f4604bf3898820f2b5ed"
	I1123 08:50:12.124912 1202718 cri.go:89] found id: "014caef83785d9e215263867e2ad026cec906105ca6cef8f64db232c473788dc"
	I1123 08:50:12.124915 1202718 cri.go:89] found id: "7868a019e831a2a996406b4d89aeecb6d8d06950ed992d7b8da380cb642f8763"
	I1123 08:50:12.124919 1202718 cri.go:89] found id: "0f807846d8d95db888a7aa3b3464682121926c4e8c8de2fb4a034403188441bc"
	I1123 08:50:12.124922 1202718 cri.go:89] found id: "08bc69e75e392dc8dcaa5e734aca0453b6fa3505a2d07872309a1e92aa887ca2"
	I1123 08:50:12.124926 1202718 cri.go:89] found id: "daa63f34218f6ccf5f1c984a5efedf997465def664bd74e9558bce3ec2095793"
	I1123 08:50:12.124963 1202718 cri.go:89] found id: ""
	I1123 08:50:12.125023 1202718 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 08:50:12.136286 1202718 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:50:12Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:50:12.136364 1202718 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:50:12.144016 1202718 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 08:50:12.144037 1202718 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 08:50:12.144107 1202718 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 08:50:12.151160 1202718 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:50:12.151842 1202718 kubeconfig.go:125] found "pause-041000" server: "https://192.168.85.2:8443"
	I1123 08:50:12.152627 1202718 kapi.go:59] client config for pause-041000: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/client.crt", KeyFile:"/home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/client.key", CAFile:"/home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 08:50:12.153122 1202718 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1123 08:50:12.153142 1202718 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1123 08:50:12.153148 1202718 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1123 08:50:12.153155 1202718 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1123 08:50:12.153165 1202718 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1123 08:50:12.153428 1202718 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 08:50:12.160964 1202718 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 08:50:12.161061 1202718 kubeadm.go:602] duration metric: took 17.010572ms to restartPrimaryControlPlane
	I1123 08:50:12.161079 1202718 kubeadm.go:403] duration metric: took 64.377103ms to StartCluster
	I1123 08:50:12.161095 1202718 settings.go:142] acquiring lock: {Name:mk23f3092f33e47ced9558cb4bac2b30c55547fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:50:12.161167 1202718 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:50:12.162027 1202718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:50:12.162280 1202718 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:50:12.162715 1202718 config.go:182] Loaded profile config "pause-041000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:50:12.162768 1202718 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:50:12.166499 1202718 out.go:179] * Verifying Kubernetes components...
	I1123 08:50:12.166500 1202718 out.go:179] * Enabled addons: 
	I1123 08:50:12.169366 1202718 addons.go:530] duration metric: took 6.601147ms for enable addons: enabled=[]
	I1123 08:50:12.169453 1202718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:50:12.327734 1202718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:50:12.344308 1202718 node_ready.go:35] waiting up to 6m0s for node "pause-041000" to be "Ready" ...
	I1123 08:50:14.168647 1187534 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:50:14.169003 1187534 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:50:14.169051 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:50:14.169132 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:50:14.230306 1187534 cri.go:89] found id: "39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:14.230330 1187534 cri.go:89] found id: ""
	I1123 08:50:14.230338 1187534 logs.go:282] 1 containers: [39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387]
	I1123 08:50:14.230394 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:14.239126 1187534 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1123 08:50:14.239212 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:50:14.298328 1187534 cri.go:89] found id: ""
	I1123 08:50:14.298354 1187534 logs.go:282] 0 containers: []
	W1123 08:50:14.298364 1187534 logs.go:284] No container was found matching "etcd"
	I1123 08:50:14.298378 1187534 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1123 08:50:14.298610 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:50:14.351914 1187534 cri.go:89] found id: ""
	I1123 08:50:14.351948 1187534 logs.go:282] 0 containers: []
	W1123 08:50:14.351956 1187534 logs.go:284] No container was found matching "coredns"
	I1123 08:50:14.351964 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:50:14.352059 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:50:14.386478 1187534 cri.go:89] found id: "ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:14.386516 1187534 cri.go:89] found id: ""
	I1123 08:50:14.386524 1187534 logs.go:282] 1 containers: [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37]
	I1123 08:50:14.386591 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:14.390064 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:50:14.390142 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:50:14.437347 1187534 cri.go:89] found id: ""
	I1123 08:50:14.437374 1187534 logs.go:282] 0 containers: []
	W1123 08:50:14.437391 1187534 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:50:14.437397 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:50:14.437469 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:50:14.493834 1187534 cri.go:89] found id: "82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:14.493860 1187534 cri.go:89] found id: ""
	I1123 08:50:14.493868 1187534 logs.go:282] 1 containers: [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417]
	I1123 08:50:14.493934 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:14.503076 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1123 08:50:14.503166 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:50:14.555430 1187534 cri.go:89] found id: ""
	I1123 08:50:14.555470 1187534 logs.go:282] 0 containers: []
	W1123 08:50:14.555480 1187534 logs.go:284] No container was found matching "kindnet"
	I1123 08:50:14.555488 1187534 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:50:14.555559 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:50:14.609105 1187534 cri.go:89] found id: ""
	I1123 08:50:14.609132 1187534 logs.go:282] 0 containers: []
	W1123 08:50:14.609159 1187534 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:50:14.609168 1187534 logs.go:123] Gathering logs for dmesg ...
	I1123 08:50:14.609183 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:50:14.637863 1187534 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:50:14.637904 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:50:14.766527 1187534 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:50:14.766558 1187534 logs.go:123] Gathering logs for kube-apiserver [39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387] ...
	I1123 08:50:14.766573 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:14.821424 1187534 logs.go:123] Gathering logs for kube-scheduler [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37] ...
	I1123 08:50:14.821456 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:14.917613 1187534 logs.go:123] Gathering logs for kube-controller-manager [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417] ...
	I1123 08:50:14.917697 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:14.983379 1187534 logs.go:123] Gathering logs for CRI-O ...
	I1123 08:50:14.983404 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1123 08:50:15.064735 1187534 logs.go:123] Gathering logs for container status ...
	I1123 08:50:15.064814 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:50:15.131449 1187534 logs.go:123] Gathering logs for kubelet ...
	I1123 08:50:15.131529 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:50:17.776295 1187534 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:50:19.512201 1202718 node_ready.go:49] node "pause-041000" is "Ready"
	I1123 08:50:19.512232 1202718 node_ready.go:38] duration metric: took 7.167870988s for node "pause-041000" to be "Ready" ...
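(The readiness poll above goes through the Go client; an equivalent manual check from the host is a sketch like the following, assuming the kubeconfig context minikube writes is named after the profile:)

	kubectl --context pause-041000 wait --for=condition=Ready node/pause-041000 --timeout=6m0s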
	I1123 08:50:19.512245 1202718 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:50:19.512302 1202718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:50:19.528916 1202718 api_server.go:72] duration metric: took 7.366598415s to wait for apiserver process to appear ...
	I1123 08:50:19.528941 1202718 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:50:19.528959 1202718 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 08:50:19.656689 1202718 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 08:50:19.656787 1202718 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 08:50:20.029057 1202718 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 08:50:20.039141 1202718 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 08:50:20.039353 1202718 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 08:50:20.529660 1202718 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 08:50:20.537742 1202718 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 08:50:20.538843 1202718 api_server.go:141] control plane version: v1.34.1
	I1123 08:50:20.538868 1202718 api_server.go:131] duration metric: took 1.009920448s to wait for apiserver health ...
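(The 500 responses above come from /healthz aggregating the apiserver's post-start hooks; once the remaining hooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes finish, the endpoint flips to 200. A hedged sketch for probing it by hand from the node, assuming anonymous access to /healthz is enabled, which is the default:)

	# print the per-hook breakdown while the apiserver is still returning 500
	curl --cacert /var/lib/minikube/certs/ca.crt "https://192.168.85.2:8443/healthz?verbose"; echo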
	I1123 08:50:20.538877 1202718 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:50:20.542229 1202718 system_pods.go:59] 7 kube-system pods found
	I1123 08:50:20.542267 1202718 system_pods.go:61] "coredns-66bc5c9577-p8fzx" [449bd814-b2c4-445c-8341-2d6fd4035f0e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:50:20.542279 1202718 system_pods.go:61] "etcd-pause-041000" [686e9429-7f08-4647-b76a-ea1509e228e8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:50:20.542285 1202718 system_pods.go:61] "kindnet-pzr9x" [26b37d37-77bd-4372-9d14-476cd4f1e851] Running
	I1123 08:50:20.542291 1202718 system_pods.go:61] "kube-apiserver-pause-041000" [22756ad9-71b8-43a4-ae5d-d736b1925a32] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:50:20.542303 1202718 system_pods.go:61] "kube-controller-manager-pause-041000" [a677a284-6f9d-4054-99b8-ce3ec472d3d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:50:20.542310 1202718 system_pods.go:61] "kube-proxy-jzpjt" [d43fb6ce-e107-46b2-9d52-19736141dc91] Running
	I1123 08:50:20.542319 1202718 system_pods.go:61] "kube-scheduler-pause-041000" [e3f5cb4f-2087-4f60-bcfa-8dc8a2f6a21c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:50:20.542329 1202718 system_pods.go:74] duration metric: took 3.445841ms to wait for pod list to return data ...
	I1123 08:50:20.542339 1202718 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:50:20.544825 1202718 default_sa.go:45] found service account: "default"
	I1123 08:50:20.544851 1202718 default_sa.go:55] duration metric: took 2.503343ms for default service account to be created ...
	I1123 08:50:20.544861 1202718 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:50:20.547535 1202718 system_pods.go:86] 7 kube-system pods found
	I1123 08:50:20.547567 1202718 system_pods.go:89] "coredns-66bc5c9577-p8fzx" [449bd814-b2c4-445c-8341-2d6fd4035f0e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:50:20.547577 1202718 system_pods.go:89] "etcd-pause-041000" [686e9429-7f08-4647-b76a-ea1509e228e8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:50:20.547584 1202718 system_pods.go:89] "kindnet-pzr9x" [26b37d37-77bd-4372-9d14-476cd4f1e851] Running
	I1123 08:50:20.547590 1202718 system_pods.go:89] "kube-apiserver-pause-041000" [22756ad9-71b8-43a4-ae5d-d736b1925a32] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:50:20.547598 1202718 system_pods.go:89] "kube-controller-manager-pause-041000" [a677a284-6f9d-4054-99b8-ce3ec472d3d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:50:20.547606 1202718 system_pods.go:89] "kube-proxy-jzpjt" [d43fb6ce-e107-46b2-9d52-19736141dc91] Running
	I1123 08:50:20.547616 1202718 system_pods.go:89] "kube-scheduler-pause-041000" [e3f5cb4f-2087-4f60-bcfa-8dc8a2f6a21c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:50:20.547623 1202718 system_pods.go:126] duration metric: took 2.756129ms to wait for k8s-apps to be running ...
	I1123 08:50:20.547635 1202718 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:50:20.547692 1202718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:50:20.561197 1202718 system_svc.go:56] duration metric: took 13.553464ms WaitForService to wait for kubelet
	I1123 08:50:20.561224 1202718 kubeadm.go:587] duration metric: took 8.398912422s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:50:20.561240 1202718 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:50:20.564454 1202718 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:50:20.564485 1202718 node_conditions.go:123] node cpu capacity is 2
	I1123 08:50:20.564499 1202718 node_conditions.go:105] duration metric: took 3.252821ms to run NodePressure ...
	I1123 08:50:20.564511 1202718 start.go:242] waiting for startup goroutines ...
	I1123 08:50:20.564518 1202718 start.go:247] waiting for cluster config update ...
	I1123 08:50:20.564532 1202718 start.go:256] writing updated cluster config ...
	I1123 08:50:20.564846 1202718 ssh_runner.go:195] Run: rm -f paused
	I1123 08:50:20.568495 1202718 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:50:20.569112 1202718 kapi.go:59] client config for pause-041000: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/client.crt", KeyFile:"/home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/client.key", CAFile:"/home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 08:50:20.572578 1202718 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p8fzx" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:50:22.578660 1202718 pod_ready.go:104] pod "coredns-66bc5c9577-p8fzx" is not "Ready", error: <nil>
	I1123 08:50:22.776905 1187534 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1123 08:50:22.776972 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:50:22.777039 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:50:22.814040 1187534 cri.go:89] found id: "5abe9dd9f1f9662dd8f041afce9a5d1e1922dff28712b78a4a42382a6249645b"
	I1123 08:50:22.814058 1187534 cri.go:89] found id: "39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:22.814063 1187534 cri.go:89] found id: ""
	I1123 08:50:22.814070 1187534 logs.go:282] 2 containers: [5abe9dd9f1f9662dd8f041afce9a5d1e1922dff28712b78a4a42382a6249645b 39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387]
	I1123 08:50:22.814135 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:22.818324 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:22.822093 1187534 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1123 08:50:22.822166 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:50:22.853928 1187534 cri.go:89] found id: ""
	I1123 08:50:22.853954 1187534 logs.go:282] 0 containers: []
	W1123 08:50:22.853963 1187534 logs.go:284] No container was found matching "etcd"
	I1123 08:50:22.853970 1187534 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1123 08:50:22.854031 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:50:22.896180 1187534 cri.go:89] found id: ""
	I1123 08:50:22.896208 1187534 logs.go:282] 0 containers: []
	W1123 08:50:22.896217 1187534 logs.go:284] No container was found matching "coredns"
	I1123 08:50:22.896223 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:50:22.896281 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:50:22.940549 1187534 cri.go:89] found id: "ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:22.940575 1187534 cri.go:89] found id: ""
	I1123 08:50:22.940583 1187534 logs.go:282] 1 containers: [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37]
	I1123 08:50:22.940651 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:22.944486 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:50:22.944555 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:50:22.974972 1187534 cri.go:89] found id: ""
	I1123 08:50:22.974999 1187534 logs.go:282] 0 containers: []
	W1123 08:50:22.975008 1187534 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:50:22.975015 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:50:22.975074 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:50:23.006880 1187534 cri.go:89] found id: "82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:23.006906 1187534 cri.go:89] found id: ""
	I1123 08:50:23.006914 1187534 logs.go:282] 1 containers: [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417]
	I1123 08:50:23.006976 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:23.010823 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1123 08:50:23.010898 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:50:23.038139 1187534 cri.go:89] found id: ""
	I1123 08:50:23.038161 1187534 logs.go:282] 0 containers: []
	W1123 08:50:23.038170 1187534 logs.go:284] No container was found matching "kindnet"
	I1123 08:50:23.038176 1187534 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:50:23.038235 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:50:23.083217 1187534 cri.go:89] found id: ""
	I1123 08:50:23.083238 1187534 logs.go:282] 0 containers: []
	W1123 08:50:23.083246 1187534 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:50:23.083260 1187534 logs.go:123] Gathering logs for kube-apiserver [39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387] ...
	I1123 08:50:23.083281 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:23.127751 1187534 logs.go:123] Gathering logs for kube-scheduler [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37] ...
	I1123 08:50:23.127789 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:23.195977 1187534 logs.go:123] Gathering logs for kube-controller-manager [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417] ...
	I1123 08:50:23.196013 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:23.229217 1187534 logs.go:123] Gathering logs for CRI-O ...
	I1123 08:50:23.229241 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1123 08:50:23.298646 1187534 logs.go:123] Gathering logs for container status ...
	I1123 08:50:23.298734 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:50:23.352858 1187534 logs.go:123] Gathering logs for kubelet ...
	I1123 08:50:23.352934 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:50:23.470434 1187534 logs.go:123] Gathering logs for dmesg ...
	I1123 08:50:23.470470 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:50:23.488779 1187534 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:50:23.488808 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
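The "Gathering logs for ..." lines above belong to a different profile (apiserver at 192.168.76.2) whose health check timed out, so minikube falls back to collecting diagnostics: crictl logs --tail 400 for each container it found, journalctl for CRI-O and the kubelet, dmesg, and kubectl describe nodes. A rough local approximation of one of those calls follows, assuming crictl is installed and the container ID from the log exists on the machine (minikube actually runs the command over SSH inside the node):

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs mirrors the `sudo crictl logs --tail 400 <id>` calls above.
func tailContainerLogs(containerID string, n int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), containerID).CombinedOutput()
	return string(out), err
}

func main() {
	// Container ID copied from the log above; it only exists on that node.
	id := "39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	logs, err := tailContainerLogs(id, 400)
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	fmt.Print(logs)
}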
	I1123 08:50:24.578410 1202718 pod_ready.go:94] pod "coredns-66bc5c9577-p8fzx" is "Ready"
	I1123 08:50:24.578440 1202718 pod_ready.go:86] duration metric: took 4.005835132s for pod "coredns-66bc5c9577-p8fzx" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:24.581302 1202718 pod_ready.go:83] waiting for pod "etcd-pause-041000" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:50:26.586439 1202718 pod_ready.go:104] pod "etcd-pause-041000" is not "Ready", error: <nil>
	I1123 08:50:28.087562 1202718 pod_ready.go:94] pod "etcd-pause-041000" is "Ready"
	I1123 08:50:28.087590 1202718 pod_ready.go:86] duration metric: took 3.506265831s for pod "etcd-pause-041000" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:28.090226 1202718 pod_ready.go:83] waiting for pod "kube-apiserver-pause-041000" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:28.095484 1202718 pod_ready.go:94] pod "kube-apiserver-pause-041000" is "Ready"
	I1123 08:50:28.095514 1202718 pod_ready.go:86] duration metric: took 5.262375ms for pod "kube-apiserver-pause-041000" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:28.098064 1202718 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-041000" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:28.102787 1202718 pod_ready.go:94] pod "kube-controller-manager-pause-041000" is "Ready"
	I1123 08:50:28.102816 1202718 pod_ready.go:86] duration metric: took 4.728835ms for pod "kube-controller-manager-pause-041000" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:28.105346 1202718 pod_ready.go:83] waiting for pod "kube-proxy-jzpjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:28.286092 1202718 pod_ready.go:94] pod "kube-proxy-jzpjt" is "Ready"
	I1123 08:50:28.286122 1202718 pod_ready.go:86] duration metric: took 180.750695ms for pod "kube-proxy-jzpjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:28.486102 1202718 pod_ready.go:83] waiting for pod "kube-scheduler-pause-041000" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:29.286171 1202718 pod_ready.go:94] pod "kube-scheduler-pause-041000" is "Ready"
	I1123 08:50:29.286200 1202718 pod_ready.go:86] duration metric: took 800.068363ms for pod "kube-scheduler-pause-041000" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:29.286213 1202718 pod_ready.go:40] duration metric: took 8.717685176s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
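The pod_ready.go lines ending here implement the "extra waiting" step: each control-plane label (k8s-app=kube-dns, component=etcd, and so on) is polled until its pod reports the Ready condition or disappears. A condensed client-go sketch of that kind of wait follows; the kubeconfig path and the reduced label list are assumptions for illustration, not minikube's implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForLabeledPods polls kube-system pods matching label until all are Ready.
func waitForLabeledPods(ctx context.Context, cs *kubernetes.Clientset, label string) error {
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: label})
		if err != nil {
			return err
		}
		allReady := len(pods.Items) > 0
		for i := range pods.Items {
			if !podIsReady(&pods.Items[i]) {
				allReady = false
			}
		}
		if allReady {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	// Hypothetical kubeconfig path; minikube builds its rest.Config directly (kapi.go above).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for _, label := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"} {
		if err := waitForLabeledPods(ctx, cs, label); err != nil {
			fmt.Println(label, "not ready:", err)
			return
		}
		fmt.Println(label, "is Ready")
	}
}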
	I1123 08:50:29.336965 1202718 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 08:50:29.339907 1202718 out.go:179] * Done! kubectl is now configured to use "pause-041000" cluster and "default" namespace by default
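The version line just above reports the difference between the kubectl client's minor version and the cluster's; a skew of one minor version is within kubectl's documented support range, so only a note is printed. A toy Go sketch of that arithmetic, with the two versions hard-coded from the log:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version string.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, cluster := "1.33.2", "1.34.1" // values from the log line above
	skew := minor(cluster) - minor(client)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
}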
	
	
	==> CRI-O <==
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.490002711Z" level=info msg="Created container 5706a7e3c437e929a3040132d53857e200f691e33708663ad34cf3739ffa9fa5: kube-system/etcd-pause-041000/etcd" id=23a3d8bd-7d03-4fdb-bb07-b536042af6df name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.490921506Z" level=info msg="Starting container: 8930f41e87c1a562cfa6ee0b54b62abcdfa7cb763db50ee729cccd3950c35f8d" id=2211fa7c-c178-44a9-aa33-5c00b6f456a3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.49939631Z" level=info msg="Starting container: 5706a7e3c437e929a3040132d53857e200f691e33708663ad34cf3739ffa9fa5" id=7755e48a-872b-4075-83ee-bb3b711df221 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.501095162Z" level=info msg="Starting container: 3ad7b30765627018f51db658fa83ba644bdf45e1aba371d0fb38f45a91374bcd" id=1695823e-594c-4813-bc91-9a0716296136 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.502846065Z" level=info msg="Started container" PID=2388 containerID=8930f41e87c1a562cfa6ee0b54b62abcdfa7cb763db50ee729cccd3950c35f8d description=kube-system/kindnet-pzr9x/kindnet-cni id=2211fa7c-c178-44a9-aa33-5c00b6f456a3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5e2c95d12a6b3610a922f16a275c4d0a24af5fa8872275fdda90330aa3b49bfd
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.504364581Z" level=info msg="Started container" PID=2385 containerID=5706a7e3c437e929a3040132d53857e200f691e33708663ad34cf3739ffa9fa5 description=kube-system/etcd-pause-041000/etcd id=7755e48a-872b-4075-83ee-bb3b711df221 name=/runtime.v1.RuntimeService/StartContainer sandboxID=888cec96d3d406ea3499df9b84f2f6829ba443323ea1b3f27f5919cb99374ece
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.507896451Z" level=info msg="Started container" PID=2372 containerID=3ad7b30765627018f51db658fa83ba644bdf45e1aba371d0fb38f45a91374bcd description=kube-system/kube-controller-manager-pause-041000/kube-controller-manager id=1695823e-594c-4813-bc91-9a0716296136 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b902b4f9183bd972221bff77da33f1b1501315fc678e1c4355f39446fe4769f
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.570618426Z" level=info msg="Created container 72f2b8001c28bfca63c62972110a9ad36c4820ca3ef16e526e0a844ca887492c: kube-system/kube-proxy-jzpjt/kube-proxy" id=e42c951c-a23d-444d-b080-63383b9a0a7f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.57136344Z" level=info msg="Starting container: 72f2b8001c28bfca63c62972110a9ad36c4820ca3ef16e526e0a844ca887492c" id=c11efcb3-7e32-4b11-9b71-26e4f576c3d1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.573643709Z" level=info msg="Started container" PID=2395 containerID=72f2b8001c28bfca63c62972110a9ad36c4820ca3ef16e526e0a844ca887492c description=kube-system/kube-proxy-jzpjt/kube-proxy id=c11efcb3-7e32-4b11-9b71-26e4f576c3d1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=efab7da3b2afb9e5e93bd4746c5403e5084a57446bd1702bd47a19e3729c83bf
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.844972685Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.864163933Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.864200814Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.864227841Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.872143505Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.872325432Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.872406792Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.877102931Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.877335811Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.877420707Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.884778232Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.884934536Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.885017389Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.889416411Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.889729822Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	72f2b8001c28b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   19 seconds ago       Running             kube-proxy                1                   efab7da3b2afb       kube-proxy-jzpjt                       kube-system
	8930f41e87c1a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   19 seconds ago       Running             kindnet-cni               1                   5e2c95d12a6b3       kindnet-pzr9x                          kube-system
	5706a7e3c437e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   19 seconds ago       Running             etcd                      1                   888cec96d3d40       etcd-pause-041000                      kube-system
	3ad7b30765627       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   19 seconds ago       Running             kube-controller-manager   1                   0b902b4f9183b       kube-controller-manager-pause-041000   kube-system
	0f27e4bf55b60       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   20 seconds ago       Running             kube-scheduler            1                   14991bae5f05b       kube-scheduler-pause-041000            kube-system
	47c71e2fb9574       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   20 seconds ago       Running             kube-apiserver            1                   f0399c83b9c60       kube-apiserver-pause-041000            kube-system
	7f4c49daf75ac       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   20 seconds ago       Running             coredns                   1                   c05ed95a18a9e       coredns-66bc5c9577-p8fzx               kube-system
	45dc8732cc188       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   32 seconds ago       Exited              coredns                   0                   c05ed95a18a9e       coredns-66bc5c9577-p8fzx               kube-system
	9b1e6a2484dc9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   5e2c95d12a6b3       kindnet-pzr9x                          kube-system
	014caef83785d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   efab7da3b2afb       kube-proxy-jzpjt                       kube-system
	7868a019e831a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   f0399c83b9c60       kube-apiserver-pause-041000            kube-system
	0f807846d8d95       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   14991bae5f05b       kube-scheduler-pause-041000            kube-system
	08bc69e75e392       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   0b902b4f9183b       kube-controller-manager-pause-041000   kube-system
	daa63f34218f6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   888cec96d3d40       etcd-pause-041000                      kube-system
	
	
	==> coredns [45dc8732cc188eafe78085325272d3d984a7f717166cd0187bd52e340fe5512f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41535 - 50635 "HINFO IN 7569954815826303863.104617039468642547. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.024224512s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7f4c49daf75ac06cb4aa4e7ca85ebdd1cd16f76be10a2e5f73b954fc7c75b042] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40822 - 5964 "HINFO IN 4564197123057285760.4762303896874097968. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01681234s
	
	
	==> describe nodes <==
	Name:               pause-041000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-041000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=pause-041000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_49_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:49:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-041000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:50:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:49:59 +0000   Sun, 23 Nov 2025 08:49:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:49:59 +0000   Sun, 23 Nov 2025 08:49:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:49:59 +0000   Sun, 23 Nov 2025 08:49:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:49:59 +0000   Sun, 23 Nov 2025 08:49:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-041000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                0f3e7557-0353-4e9b-a0f6-47a4a416f8d8
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-p8fzx                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     74s
	  kube-system                 etcd-pause-041000                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         80s
	  kube-system                 kindnet-pzr9x                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      74s
	  kube-system                 kube-apiserver-pause-041000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-pause-041000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-jzpjt                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-pause-041000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 73s                kube-proxy       
	  Normal   Starting                 12s                kube-proxy       
	  Normal   NodeHasSufficientPID     87s (x7 over 87s)  kubelet          Node pause-041000 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 87s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  87s (x7 over 87s)  kubelet          Node pause-041000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    87s (x7 over 87s)  kubelet          Node pause-041000 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 87s                kubelet          Starting kubelet.
	  Normal   Starting                 80s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 80s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  79s                kubelet          Node pause-041000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    79s                kubelet          Node pause-041000 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     79s                kubelet          Node pause-041000 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           75s                node-controller  Node pause-041000 event: Registered Node pause-041000 in Controller
	  Normal   NodeReady                33s                kubelet          Node pause-041000 status is now: NodeReady
	  Normal   RegisteredNode           10s                node-controller  Node pause-041000 event: Registered Node pause-041000 in Controller
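The "Allocated resources" percentages in the node description above are simply the summed pod requests divided by the node's allocatable capacity (2 CPUs, 8022300Ki memory). A small Go sketch reproducing the CPU line from the per-pod requests listed in the table:

package main

import "fmt"

func main() {
	// CPU requests (millicores) of the seven kube-system pods listed above.
	requests := []int{100, 100, 100, 250, 200, 0, 100}
	total := 0
	for _, r := range requests {
		total += r
	}
	allocatable := 2 * 1000 // 2 allocatable CPUs, expressed in millicores
	fmt.Printf("cpu %dm (%d%%)\n", total, total*100/allocatable)
	// -> cpu 850m (42%), matching the Allocated resources table above.
}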
	
	
	==> dmesg <==
	[Nov23 08:23] overlayfs: idmapped layers are currently not supported
	[ +45.736894] overlayfs: idmapped layers are currently not supported
	[Nov23 08:25] overlayfs: idmapped layers are currently not supported
	[  +2.559069] overlayfs: idmapped layers are currently not supported
	[Nov23 08:26] overlayfs: idmapped layers are currently not supported
	[ +51.342642] overlayfs: idmapped layers are currently not supported
	[Nov23 08:28] overlayfs: idmapped layers are currently not supported
	[Nov23 08:32] overlayfs: idmapped layers are currently not supported
	[Nov23 08:33] overlayfs: idmapped layers are currently not supported
	[Nov23 08:34] overlayfs: idmapped layers are currently not supported
	[Nov23 08:35] overlayfs: idmapped layers are currently not supported
	[Nov23 08:36] overlayfs: idmapped layers are currently not supported
	[Nov23 08:37] overlayfs: idmapped layers are currently not supported
	[Nov23 08:38] overlayfs: idmapped layers are currently not supported
	[  +8.276067] overlayfs: idmapped layers are currently not supported
	[Nov23 08:39] overlayfs: idmapped layers are currently not supported
	[ +25.090966] overlayfs: idmapped layers are currently not supported
	[Nov23 08:40] overlayfs: idmapped layers are currently not supported
	[ +26.896711] overlayfs: idmapped layers are currently not supported
	[Nov23 08:41] overlayfs: idmapped layers are currently not supported
	[Nov23 08:43] overlayfs: idmapped layers are currently not supported
	[Nov23 08:45] overlayfs: idmapped layers are currently not supported
	[Nov23 08:46] overlayfs: idmapped layers are currently not supported
	[Nov23 08:47] overlayfs: idmapped layers are currently not supported
	[Nov23 08:49] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5706a7e3c437e929a3040132d53857e200f691e33708663ad34cf3739ffa9fa5] <==
	{"level":"warn","ts":"2025-11-23T08:50:17.900275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:17.927903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:17.953245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:17.966172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:17.993735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.022499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.093116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.139425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.153535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.183633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.198590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.228490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.260058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.292880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.309269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.337409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.363981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.403066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.431680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.462288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.522644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.575614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.584838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.604009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.736083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40458","server-name":"","error":"EOF"}
	
	
	==> etcd [daa63f34218f6ccf5f1c984a5efedf997465def664bd74e9558bce3ec2095793] <==
	{"level":"warn","ts":"2025-11-23T08:49:09.194248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:49:09.224162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:49:09.292196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:49:09.297812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:49:09.341476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:49:09.357795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:49:09.533173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35138","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T08:50:03.840501Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-23T08:50:03.840557Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-041000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-23T08:50:03.840650Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T08:50:03.977861Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T08:50:03.977940Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T08:50:03.977962Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-23T08:50:03.978032Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-23T08:50:03.978102Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T08:50:03.978128Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T08:50:03.978137Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T08:50:03.978138Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-23T08:50:03.978176Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T08:50:03.978184Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T08:50:03.978191Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T08:50:03.981406Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-23T08:50:03.981485Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T08:50:03.981520Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T08:50:03.981528Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-041000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 08:50:32 up  9:32,  0 user,  load average: 3.41, 2.67, 2.22
	Linux pause-041000 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8930f41e87c1a562cfa6ee0b54b62abcdfa7cb763db50ee729cccd3950c35f8d] <==
	I1123 08:50:12.629566       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:50:12.630493       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:50:12.630678       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:50:12.630728       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:50:12.630767       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:50:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:50:12.844735       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:50:12.844825       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:50:12.844879       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:50:12.845929       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:50:19.645108       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:50:19.645234       1 metrics.go:72] Registering metrics
	I1123 08:50:19.645334       1 controller.go:711] "Syncing nftables rules"
	I1123 08:50:22.844564       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:50:22.844628       1 main.go:301] handling current node
	
	
	==> kindnet [9b1e6a2484dc967bcde959062922c706ed1adf1100c5f4604bf3898820f2b5ed] <==
	I1123 08:49:18.638872       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:49:18.639314       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:49:18.639992       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:49:18.640061       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:49:18.640102       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:49:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:49:18.845346       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:49:18.848194       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:49:18.848290       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:49:18.848505       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:49:48.849974       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 08:49:48.850099       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 08:49:48.850215       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 08:49:48.928660       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1123 08:49:50.048588       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:49:50.048637       1 metrics.go:72] Registering metrics
	I1123 08:49:50.048732       1 controller.go:711] "Syncing nftables rules"
	I1123 08:49:58.851829       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:49:58.851883       1 main.go:301] handling current node
	
	
	==> kube-apiserver [47c71e2fb9574a058e5a5920e44ae120194a58b368bf86420a497c977179f436] <==
	I1123 08:50:19.602136       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 08:50:19.602437       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 08:50:19.602453       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 08:50:19.602621       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 08:50:19.607244       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 08:50:19.614209       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 08:50:19.614347       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:50:19.618104       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 08:50:19.618171       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 08:50:19.618300       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 08:50:19.618329       1 policy_source.go:240] refreshing policies
	I1123 08:50:19.618413       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 08:50:19.618505       1 aggregator.go:171] initial CRD sync complete...
	I1123 08:50:19.618518       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 08:50:19.618523       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:50:19.618528       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:50:19.619737       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 08:50:19.671500       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1123 08:50:19.723470       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 08:50:20.271819       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:50:21.464145       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:50:22.926002       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:50:23.164056       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:50:23.214895       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:50:23.276029       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [7868a019e831a2a996406b4d89aeecb6d8d06950ed992d7b8da380cb642f8763] <==
	W1123 08:50:03.862273       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862360       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862412       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862458       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862507       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862554       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862605       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862650       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862708       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862755       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862908       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862966       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863028       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863090       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863134       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863321       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863519       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863570       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863617       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863663       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863708       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863753       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863986       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.864067       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.864125       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [08bc69e75e392dc8dcaa5e734aca0453b6fa3505a2d07872309a1e92aa887ca2] <==
	I1123 08:49:17.413431       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:49:17.406998       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:49:17.408345       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:49:17.409611       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:49:17.406423       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:49:17.406820       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:49:17.406783       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-041000" podCIDRs=["10.244.0.0/24"]
	I1123 08:49:17.414580       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:49:17.414702       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:49:17.414718       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:49:17.414768       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:49:17.429173       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 08:49:17.429318       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:49:17.433226       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:49:17.453889       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:49:17.456709       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:49:17.459326       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:49:17.459406       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:49:17.459291       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:49:17.459267       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:49:17.462151       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:49:17.462266       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-041000"
	I1123 08:49:17.462331       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 08:49:17.479091       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:50:02.468790       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [3ad7b30765627018f51db658fa83ba644bdf45e1aba371d0fb38f45a91374bcd] <==
	I1123 08:50:22.890084       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:50:22.891604       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 08:50:22.893529       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 08:50:22.895891       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:50:22.898835       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:50:22.901581       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:50:22.904787       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:50:22.914304       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:50:22.914914       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:50:22.915029       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:50:22.915083       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:50:22.915111       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:50:22.915336       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:50:22.919117       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:50:22.920344       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:50:22.928209       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:50:22.930524       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:50:22.936207       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:50:22.949371       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:50:22.957067       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:50:22.957234       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:50:22.957333       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-041000"
	I1123 08:50:22.957406       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 08:50:22.970650       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:50:22.972831       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [014caef83785d9e215263867e2ad026cec906105ca6cef8f64db232c473788dc] <==
	I1123 08:49:18.686133       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:49:18.764976       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:49:18.868342       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:49:18.877834       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 08:49:18.877942       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:49:18.972431       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:49:18.972548       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:49:18.984288       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:49:18.984622       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:49:18.984763       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:49:18.986977       1 config.go:200] "Starting service config controller"
	I1123 08:49:18.987036       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:49:18.987092       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:49:18.987121       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:49:18.987165       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:49:18.987283       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:49:18.988011       1 config.go:309] "Starting node config controller"
	I1123 08:49:18.988062       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:49:18.988090       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:49:19.088632       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:49:19.088688       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:49:19.088424       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [72f2b8001c28bfca63c62972110a9ad36c4820ca3ef16e526e0a844ca887492c] <==
	I1123 08:50:16.191359       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:50:18.866067       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:50:19.776493       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:50:19.776608       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 08:50:19.776733       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:50:19.831650       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:50:19.831752       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:50:19.840686       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:50:19.841068       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:50:19.841086       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:50:19.846790       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:50:19.846875       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:50:19.847385       1 config.go:200] "Starting service config controller"
	I1123 08:50:19.847493       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:50:19.847819       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:50:19.847826       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:50:19.848170       1 config.go:309] "Starting node config controller"
	I1123 08:50:19.848180       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:50:19.848186       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:50:19.947447       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:50:19.948611       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:50:19.948700       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [0f27e4bf55b6041def1c3334330ebd093381737fe017224f64088735e7725ee3] <==
	I1123 08:50:16.815501       1 serving.go:386] Generated self-signed cert in-memory
	W1123 08:50:19.447572       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 08:50:19.447686       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 08:50:19.447721       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 08:50:19.447751       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 08:50:19.560763       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:50:19.560857       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:50:19.563100       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:50:19.571301       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:50:19.572335       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:50:19.572491       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:50:19.675424       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [0f807846d8d95db888a7aa3b3464682121926c4e8c8de2fb4a034403188441bc] <==
	E1123 08:49:11.427665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:49:11.429882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:49:11.430038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:49:11.430229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:49:11.430333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:49:11.430406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:49:11.430485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:49:11.430595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:49:11.430719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:49:11.430800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:49:11.431120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:49:11.431342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:49:11.431434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:49:11.431510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:49:11.431568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:49:11.431613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:49:11.431662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:49:11.431849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1123 08:49:13.014794       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:50:03.843839       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1123 08:50:03.843859       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1123 08:50:03.843879       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1123 08:50:03.843906       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:50:03.844125       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1123 08:50:03.844140       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 23 08:50:12 pause-041000 kubelet[1307]: I1123 08:50:12.362318    1307 scope.go:117] "RemoveContainer" containerID="9b1e6a2484dc967bcde959062922c706ed1adf1100c5f4604bf3898820f2b5ed"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.362948    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-p8fzx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="449bd814-b2c4-445c-8341-2d6fd4035f0e" pod="kube-system/coredns-66bc5c9577-p8fzx"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.363129    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-041000\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="dc7adce6a209181bcf58f5abfc21fa44" pod="kube-system/kube-controller-manager-pause-041000"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.363504    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-041000\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="7cc0754c87d623873a73f56f4f590f1e" pod="kube-system/kube-scheduler-pause-041000"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.363672    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-041000\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a07005e87541b45ab343768ee149a754" pod="kube-system/etcd-pause-041000"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.363833    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-041000\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="05758d343c56073bd323082828a7f2e9" pod="kube-system/kube-apiserver-pause-041000"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.363993    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-pzr9x\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="26b37d37-77bd-4372-9d14-476cd4f1e851" pod="kube-system/kindnet-pzr9x"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: I1123 08:50:12.405693    1307 scope.go:117] "RemoveContainer" containerID="014caef83785d9e215263867e2ad026cec906105ca6cef8f64db232c473788dc"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.406125    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-041000\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="7cc0754c87d623873a73f56f4f590f1e" pod="kube-system/kube-scheduler-pause-041000"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.406638    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-041000\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a07005e87541b45ab343768ee149a754" pod="kube-system/etcd-pause-041000"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.406906    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-041000\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="05758d343c56073bd323082828a7f2e9" pod="kube-system/kube-apiserver-pause-041000"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.407701    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jzpjt\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d43fb6ce-e107-46b2-9d52-19736141dc91" pod="kube-system/kube-proxy-jzpjt"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.408071    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-pzr9x\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="26b37d37-77bd-4372-9d14-476cd4f1e851" pod="kube-system/kindnet-pzr9x"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.408464    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-p8fzx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="449bd814-b2c4-445c-8341-2d6fd4035f0e" pod="kube-system/coredns-66bc5c9577-p8fzx"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.408774    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-041000\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="dc7adce6a209181bcf58f5abfc21fa44" pod="kube-system/kube-controller-manager-pause-041000"
	Nov 23 08:50:19 pause-041000 kubelet[1307]: E1123 08:50:19.312601    1307 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-041000\" is forbidden: User \"system:node:pause-041000\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-041000' and this object" podUID="7cc0754c87d623873a73f56f4f590f1e" pod="kube-system/kube-scheduler-pause-041000"
	Nov 23 08:50:19 pause-041000 kubelet[1307]: E1123 08:50:19.312845    1307 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-041000\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-041000' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 23 08:50:19 pause-041000 kubelet[1307]: E1123 08:50:19.312956    1307 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-041000\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-041000' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 23 08:50:19 pause-041000 kubelet[1307]: E1123 08:50:19.313427    1307 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-041000\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-041000' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 23 08:50:19 pause-041000 kubelet[1307]: E1123 08:50:19.432310    1307 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-041000\" is forbidden: User \"system:node:pause-041000\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-041000' and this object" podUID="a07005e87541b45ab343768ee149a754" pod="kube-system/etcd-pause-041000"
	Nov 23 08:50:19 pause-041000 kubelet[1307]: E1123 08:50:19.500425    1307 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-041000\" is forbidden: User \"system:node:pause-041000\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-041000' and this object" podUID="05758d343c56073bd323082828a7f2e9" pod="kube-system/kube-apiserver-pause-041000"
	Nov 23 08:50:23 pause-041000 kubelet[1307]: W1123 08:50:23.317956    1307 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 23 08:50:29 pause-041000 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:50:29 pause-041000 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:50:29 pause-041000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-041000 -n pause-041000
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-041000 -n pause-041000: exit status 2 (350.187629ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-041000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-041000
helpers_test.go:243: (dbg) docker inspect pause-041000:

-- stdout --
	[
	    {
	        "Id": "716003843f34a16a4398b0f92f1ec30229f4baf9aa34e7fc071fd5e14eee9ea6",
	        "Created": "2025-11-23T08:48:47.937244349Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1198486,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:48:48.017743348Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/716003843f34a16a4398b0f92f1ec30229f4baf9aa34e7fc071fd5e14eee9ea6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/716003843f34a16a4398b0f92f1ec30229f4baf9aa34e7fc071fd5e14eee9ea6/hostname",
	        "HostsPath": "/var/lib/docker/containers/716003843f34a16a4398b0f92f1ec30229f4baf9aa34e7fc071fd5e14eee9ea6/hosts",
	        "LogPath": "/var/lib/docker/containers/716003843f34a16a4398b0f92f1ec30229f4baf9aa34e7fc071fd5e14eee9ea6/716003843f34a16a4398b0f92f1ec30229f4baf9aa34e7fc071fd5e14eee9ea6-json.log",
	        "Name": "/pause-041000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-041000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-041000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "716003843f34a16a4398b0f92f1ec30229f4baf9aa34e7fc071fd5e14eee9ea6",
	                "LowerDir": "/var/lib/docker/overlay2/d4286facfeda8026edb408d3a28ac48e26c98d7b6c4942287882529324f3c0af-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d4286facfeda8026edb408d3a28ac48e26c98d7b6c4942287882529324f3c0af/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d4286facfeda8026edb408d3a28ac48e26c98d7b6c4942287882529324f3c0af/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d4286facfeda8026edb408d3a28ac48e26c98d7b6c4942287882529324f3c0af/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-041000",
	                "Source": "/var/lib/docker/volumes/pause-041000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-041000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-041000",
	                "name.minikube.sigs.k8s.io": "pause-041000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "397feafbbc91e0992a5b3a35dde420980d91399bd3e3d16903c0ebdcfe6a6800",
	            "SandboxKey": "/var/run/docker/netns/397feafbbc91",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34487"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34488"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34491"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34489"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34490"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-041000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:b9:c9:37:f1:3e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "67209089d6a2b97833ff4c62ee75196f13a8692732ca0bb2c519047a19d0d291",
	                    "EndpointID": "48c46a6f478adc038bf2f1178863b66c73263e18f0a8535868623c8f78e66069",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-041000",
	                        "716003843f34"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-041000 -n pause-041000
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-041000 -n pause-041000: exit status 2 (348.044843ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-041000 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-041000 logs -n 25: (1.376581671s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-293465 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-293465       │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p missing-upgrade-232904 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-232904    │ jenkins │ v1.32.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p NoKubernetes-293465 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-293465       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p NoKubernetes-293465                                                                                                                   │ NoKubernetes-293465       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p NoKubernetes-293465 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-293465       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ ssh     │ -p NoKubernetes-293465 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-293465       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ stop    │ -p NoKubernetes-293465                                                                                                                   │ NoKubernetes-293465       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p missing-upgrade-232904 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-232904    │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:46 UTC │
	│ start   │ -p NoKubernetes-293465 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-293465       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:46 UTC │
	│ ssh     │ -p NoKubernetes-293465 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-293465       │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │                     │
	│ delete  │ -p NoKubernetes-293465                                                                                                                   │ NoKubernetes-293465       │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ start   │ -p kubernetes-upgrade-354226 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-354226 │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ delete  │ -p missing-upgrade-232904                                                                                                                │ missing-upgrade-232904    │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ stop    │ -p kubernetes-upgrade-354226                                                                                                             │ kubernetes-upgrade-354226 │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ start   │ -p kubernetes-upgrade-354226 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-354226 │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │                     │
	│ start   │ -p stopped-upgrade-885580 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-885580    │ jenkins │ v1.32.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:47 UTC │
	│ stop    │ stopped-upgrade-885580 stop                                                                                                              │ stopped-upgrade-885580    │ jenkins │ v1.32.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ start   │ -p stopped-upgrade-885580 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-885580    │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ delete  │ -p stopped-upgrade-885580                                                                                                                │ stopped-upgrade-885580    │ jenkins │ v1.37.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:47 UTC │
	│ start   │ -p running-upgrade-462653 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-462653    │ jenkins │ v1.32.0 │ 23 Nov 25 08:47 UTC │ 23 Nov 25 08:48 UTC │
	│ start   │ -p running-upgrade-462653 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-462653    │ jenkins │ v1.37.0 │ 23 Nov 25 08:48 UTC │ 23 Nov 25 08:48 UTC │
	│ delete  │ -p running-upgrade-462653                                                                                                                │ running-upgrade-462653    │ jenkins │ v1.37.0 │ 23 Nov 25 08:48 UTC │ 23 Nov 25 08:48 UTC │
	│ start   │ -p pause-041000 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-041000              │ jenkins │ v1.37.0 │ 23 Nov 25 08:48 UTC │ 23 Nov 25 08:50 UTC │
	│ start   │ -p pause-041000 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-041000              │ jenkins │ v1.37.0 │ 23 Nov 25 08:50 UTC │ 23 Nov 25 08:50 UTC │
	│ pause   │ -p pause-041000 --alsologtostderr -v=5                                                                                                   │ pause-041000              │ jenkins │ v1.37.0 │ 23 Nov 25 08:50 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
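The final row of the table is the operation whose failure this report captures. A minimal way to re-run it against the same profile, assuming the CI-built binary path that MINIKUBE_BIN points at in the start log below:

	# Hypothetical manual reproduction of the failing pause step from the last table row
	out/minikube-linux-arm64 pause -p pause-041000 --alsologtostderr -v=5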
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:50:02
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:50:02.600742 1202718 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:50:02.600857 1202718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:50:02.600869 1202718 out.go:374] Setting ErrFile to fd 2...
	I1123 08:50:02.600875 1202718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:50:02.601192 1202718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:50:02.601616 1202718 out.go:368] Setting JSON to false
	I1123 08:50:02.602778 1202718 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34348,"bootTime":1763853455,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 08:50:02.602856 1202718 start.go:143] virtualization:  
	I1123 08:50:02.607376 1202718 out.go:179] * [pause-041000] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:50:02.610390 1202718 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:50:02.610597 1202718 notify.go:221] Checking for updates...
	I1123 08:50:02.614910 1202718 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:50:02.618297 1202718 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:50:02.621181 1202718 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 08:50:02.624070 1202718 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:50:02.627015 1202718 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:50:02.630777 1202718 config.go:182] Loaded profile config "pause-041000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:50:02.631562 1202718 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:50:02.669196 1202718 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:50:02.669313 1202718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:50:02.741225 1202718 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 08:50:02.732177326 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:50:02.741335 1202718 docker.go:319] overlay module found
	I1123 08:50:02.744614 1202718 out.go:179] * Using the docker driver based on existing profile
	I1123 08:50:02.747553 1202718 start.go:309] selected driver: docker
	I1123 08:50:02.747583 1202718 start.go:927] validating driver "docker" against &{Name:pause-041000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-041000 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:50:02.747728 1202718 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:50:02.747829 1202718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:50:02.807770 1202718 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 08:50:02.798402351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:50:02.808156 1202718 cni.go:84] Creating CNI manager for ""
	I1123 08:50:02.808227 1202718 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:50:02.808270 1202718 start.go:353] cluster config:
	{Name:pause-041000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-041000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:50:02.811593 1202718 out.go:179] * Starting "pause-041000" primary control-plane node in "pause-041000" cluster
	I1123 08:50:02.814450 1202718 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:50:02.817403 1202718 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:50:02.820243 1202718 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:50:02.820292 1202718 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 08:50:02.820306 1202718 cache.go:65] Caching tarball of preloaded images
	I1123 08:50:02.820317 1202718 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:50:02.820388 1202718 preload.go:238] Found /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 08:50:02.820398 1202718 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:50:02.820525 1202718 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/config.json ...
	I1123 08:50:02.839117 1202718 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:50:02.839139 1202718 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:50:02.839159 1202718 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:50:02.839254 1202718 start.go:360] acquireMachinesLock for pause-041000: {Name:mk607c5ec25c4c2ac4976ceaf5f6a6abdbe1e557 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:50:02.839330 1202718 start.go:364] duration metric: took 49.041µs to acquireMachinesLock for "pause-041000"
	I1123 08:50:02.839353 1202718 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:50:02.839361 1202718 fix.go:54] fixHost starting: 
	I1123 08:50:02.839623 1202718 cli_runner.go:164] Run: docker container inspect pause-041000 --format={{.State.Status}}
	I1123 08:50:02.856406 1202718 fix.go:112] recreateIfNeeded on pause-041000: state=Running err=<nil>
	W1123 08:50:02.856446 1202718 fix.go:138] unexpected machine state, will restart: <nil>
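The state probe above can be repeated by hand with the same docker invocation the fix step runs; a minimal sketch using the profile name from this run:

	# Report the container state exactly as the fixHost step does; "Running" here means the
	# container itself is left alone and only re-provisioning happens
	docker container inspect pause-041000 --format='{{.State.Status}}'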
	I1123 08:50:01.910312 1187534 logs.go:123] Gathering logs for kube-scheduler [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37] ...
	I1123 08:50:01.910350 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:01.976368 1187534 logs.go:123] Gathering logs for kube-controller-manager [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417] ...
	I1123 08:50:01.976409 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:04.510296 1187534 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:50:04.510848 1187534 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
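An equivalent probe from a shell, for comparison with the Go health check above; this is only a sketch, and -k is used on the assumption that the apiserver's serving certificate is not in the local trust store:

	# Same endpoint the readiness loop polls; expect "connection refused" while the apiserver is down
	curl -k https://192.168.76.2:8443/healthz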
	I1123 08:50:04.510909 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:50:04.510981 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:50:04.547829 1187534 cri.go:89] found id: "39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:04.547858 1187534 cri.go:89] found id: ""
	I1123 08:50:04.547867 1187534 logs.go:282] 1 containers: [39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387]
	I1123 08:50:04.547922 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:04.552079 1187534 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1123 08:50:04.552153 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:50:04.580253 1187534 cri.go:89] found id: ""
	I1123 08:50:04.580278 1187534 logs.go:282] 0 containers: []
	W1123 08:50:04.580287 1187534 logs.go:284] No container was found matching "etcd"
	I1123 08:50:04.580293 1187534 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1123 08:50:04.580351 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:50:04.605349 1187534 cri.go:89] found id: ""
	I1123 08:50:04.605375 1187534 logs.go:282] 0 containers: []
	W1123 08:50:04.605384 1187534 logs.go:284] No container was found matching "coredns"
	I1123 08:50:04.605395 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:50:04.605461 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:50:04.631584 1187534 cri.go:89] found id: "ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:04.631608 1187534 cri.go:89] found id: ""
	I1123 08:50:04.631617 1187534 logs.go:282] 1 containers: [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37]
	I1123 08:50:04.631676 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:04.635259 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:50:04.635333 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:50:04.662370 1187534 cri.go:89] found id: ""
	I1123 08:50:04.662438 1187534 logs.go:282] 0 containers: []
	W1123 08:50:04.662450 1187534 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:50:04.662457 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:50:04.662546 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:50:04.689211 1187534 cri.go:89] found id: "82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:04.689231 1187534 cri.go:89] found id: ""
	I1123 08:50:04.689238 1187534 logs.go:282] 1 containers: [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417]
	I1123 08:50:04.689295 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:04.695376 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1123 08:50:04.695444 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:50:04.725844 1187534 cri.go:89] found id: ""
	I1123 08:50:04.725910 1187534 logs.go:282] 0 containers: []
	W1123 08:50:04.725934 1187534 logs.go:284] No container was found matching "kindnet"
	I1123 08:50:04.725955 1187534 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:50:04.726026 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:50:04.752803 1187534 cri.go:89] found id: ""
	I1123 08:50:04.752829 1187534 logs.go:282] 0 containers: []
	W1123 08:50:04.752837 1187534 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:50:04.752846 1187534 logs.go:123] Gathering logs for CRI-O ...
	I1123 08:50:04.752857 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1123 08:50:04.807802 1187534 logs.go:123] Gathering logs for container status ...
	I1123 08:50:04.807838 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:50:04.837479 1187534 logs.go:123] Gathering logs for kubelet ...
	I1123 08:50:04.837508 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:50:04.953068 1187534 logs.go:123] Gathering logs for dmesg ...
	I1123 08:50:04.953105 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:50:04.973388 1187534 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:50:04.973418 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:50:05.048855 1187534 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:50:05.048876 1187534 logs.go:123] Gathering logs for kube-apiserver [39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387] ...
	I1123 08:50:05.048890 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:05.081235 1187534 logs.go:123] Gathering logs for kube-scheduler [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37] ...
	I1123 08:50:05.081269 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:05.140155 1187534 logs.go:123] Gathering logs for kube-controller-manager [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417] ...
	I1123 08:50:05.140192 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
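The per-component gathering above repeats the same two crictl calls for each control-plane piece; condensed into a shell sketch built only from the commands and component names that appear in this log (assumes at most one container matches each name, as in this run):

	# List each expected component and dump its last 400 log lines if a container exists
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
	  id=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -n "$id" ]; then
	    sudo /usr/local/bin/crictl logs --tail 400 "$id"
	  else
	    echo "No container was found matching \"$name\""
	  fi
	done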
	I1123 08:50:02.859706 1202718 out.go:252] * Updating the running docker "pause-041000" container ...
	I1123 08:50:02.859760 1202718 machine.go:94] provisionDockerMachine start ...
	I1123 08:50:02.859900 1202718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-041000
	I1123 08:50:02.878321 1202718 main.go:143] libmachine: Using SSH client type: native
	I1123 08:50:02.878655 1202718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34487 <nil> <nil>}
	I1123 08:50:02.878671 1202718 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:50:03.030841 1202718 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-041000
	
	I1123 08:50:03.030935 1202718 ubuntu.go:182] provisioning hostname "pause-041000"
	I1123 08:50:03.031030 1202718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-041000
	I1123 08:50:03.049259 1202718 main.go:143] libmachine: Using SSH client type: native
	I1123 08:50:03.049590 1202718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34487 <nil> <nil>}
	I1123 08:50:03.049605 1202718 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-041000 && echo "pause-041000" | sudo tee /etc/hostname
	I1123 08:50:03.212903 1202718 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-041000
	
	I1123 08:50:03.213004 1202718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-041000
	I1123 08:50:03.233384 1202718 main.go:143] libmachine: Using SSH client type: native
	I1123 08:50:03.233720 1202718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34487 <nil> <nil>}
	I1123 08:50:03.233743 1202718 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-041000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-041000/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-041000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:50:03.383588 1202718 main.go:143] libmachine: SSH cmd err, output: <nil>: 
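Each of these provisioning commands travels over the container's published port-22 mapping (34487 in this run). A manual equivalent, sketched from the SSH key path and docker user that appear later in this log:

	# Look up the host port mapped to the node's sshd, then run a command as the docker user
	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-041000)
	ssh -i /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/pause-041000/id_rsa \
	    -o StrictHostKeyChecking=no -p "$PORT" docker@127.0.0.1 hostname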
	I1123 08:50:03.383613 1202718 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 08:50:03.383641 1202718 ubuntu.go:190] setting up certificates
	I1123 08:50:03.383650 1202718 provision.go:84] configureAuth start
	I1123 08:50:03.383708 1202718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-041000
	I1123 08:50:03.400794 1202718 provision.go:143] copyHostCerts
	I1123 08:50:03.400864 1202718 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem, removing ...
	I1123 08:50:03.400878 1202718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem
	I1123 08:50:03.400956 1202718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 08:50:03.401057 1202718 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem, removing ...
	I1123 08:50:03.401063 1202718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem
	I1123 08:50:03.401089 1202718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 08:50:03.401180 1202718 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem, removing ...
	I1123 08:50:03.401185 1202718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem
	I1123 08:50:03.401209 1202718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
	I1123 08:50:03.401254 1202718 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.pause-041000 san=[127.0.0.1 192.168.85.2 localhost minikube pause-041000]
	I1123 08:50:03.463281 1202718 provision.go:177] copyRemoteCerts
	I1123 08:50:03.463389 1202718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:50:03.463438 1202718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-041000
	I1123 08:50:03.483693 1202718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34487 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/pause-041000/id_rsa Username:docker}
	I1123 08:50:03.586812 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 08:50:03.604225 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:50:03.629674 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1123 08:50:03.652495 1202718 provision.go:87] duration metric: took 268.822644ms to configureAuth
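A quick way to confirm that the three certificates copied above landed on the node, using the remote target paths named in the scp lines:

	# Run inside the node (for example over the SSH path sketched earlier)
	sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem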
	I1123 08:50:03.652528 1202718 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:50:03.652755 1202718 config.go:182] Loaded profile config "pause-041000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:50:03.652868 1202718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-041000
	I1123 08:50:03.670309 1202718 main.go:143] libmachine: Using SSH client type: native
	I1123 08:50:03.670632 1202718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34487 <nil> <nil>}
	I1123 08:50:03.670651 1202718 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:50:09.054698 1202718 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:50:09.054722 1202718 machine.go:97] duration metric: took 6.194953541s to provisionDockerMachine
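The drop-in written by that SSH command can be inspected directly on the node; a verification sketch using only the paths from the command above:

	# Confirm the insecure-registry option was persisted and CRI-O is active after the restart
	cat /etc/sysconfig/crio.minikube
	systemctl is-active crio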
	I1123 08:50:09.054733 1202718 start.go:293] postStartSetup for "pause-041000" (driver="docker")
	I1123 08:50:09.054744 1202718 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:50:09.054821 1202718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:50:09.054867 1202718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-041000
	I1123 08:50:09.072544 1202718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34487 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/pause-041000/id_rsa Username:docker}
	I1123 08:50:09.174967 1202718 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:50:09.178258 1202718 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:50:09.178284 1202718 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:50:09.178295 1202718 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/addons for local assets ...
	I1123 08:50:09.178345 1202718 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/files for local assets ...
	I1123 08:50:09.178423 1202718 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem -> 10431592.pem in /etc/ssl/certs
	I1123 08:50:09.178537 1202718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:50:09.185618 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:50:09.203435 1202718 start.go:296] duration metric: took 148.687196ms for postStartSetup
	I1123 08:50:09.203558 1202718 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:50:09.203624 1202718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-041000
	I1123 08:50:09.220265 1202718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34487 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/pause-041000/id_rsa Username:docker}
	I1123 08:50:09.324308 1202718 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:50:09.329001 1202718 fix.go:56] duration metric: took 6.489633581s for fixHost
	I1123 08:50:09.329026 1202718 start.go:83] releasing machines lock for "pause-041000", held for 6.48968322s
	I1123 08:50:09.329100 1202718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-041000
	I1123 08:50:09.346016 1202718 ssh_runner.go:195] Run: cat /version.json
	I1123 08:50:09.346079 1202718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-041000
	I1123 08:50:09.346365 1202718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:50:09.346419 1202718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-041000
	I1123 08:50:09.363478 1202718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34487 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/pause-041000/id_rsa Username:docker}
	I1123 08:50:09.377392 1202718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34487 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/pause-041000/id_rsa Username:docker}
	I1123 08:50:09.552888 1202718 ssh_runner.go:195] Run: systemctl --version
	I1123 08:50:09.559415 1202718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:50:09.606987 1202718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:50:09.612301 1202718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:50:09.612395 1202718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:50:09.620593 1202718 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:50:09.620626 1202718 start.go:496] detecting cgroup driver to use...
	I1123 08:50:09.620668 1202718 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:50:09.620736 1202718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:50:09.637073 1202718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:50:09.650918 1202718 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:50:09.650988 1202718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:50:09.668351 1202718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:50:09.682624 1202718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:50:09.824310 1202718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:50:09.966950 1202718 docker.go:234] disabling docker service ...
	I1123 08:50:09.967075 1202718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:50:09.982014 1202718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:50:09.994750 1202718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:50:10.138241 1202718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:50:10.281483 1202718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:50:10.294662 1202718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:50:10.308807 1202718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:50:10.308888 1202718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:50:10.318107 1202718 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 08:50:10.318173 1202718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:50:10.327451 1202718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:50:10.336416 1202718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:50:10.345267 1202718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:50:10.353320 1202718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:50:10.362146 1202718 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:50:10.370790 1202718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:50:10.380268 1202718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:50:10.388219 1202718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:50:10.395758 1202718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:50:10.534792 1202718 ssh_runner.go:195] Run: sudo systemctl restart crio
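Once the restart completes, the net effect of the sed edits above can be checked in the files they target; a verification sketch run inside the node:

	# Expect pause_image "registry.k8s.io/pause:3.10.1", cgroup_manager "cgroupfs", conmon_cgroup "pod",
	# and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	cat /etc/crictl.yaml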
	I1123 08:50:10.759290 1202718 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:50:10.759364 1202718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:50:10.763575 1202718 start.go:564] Will wait 60s for crictl version
	I1123 08:50:10.763684 1202718 ssh_runner.go:195] Run: which crictl
	I1123 08:50:10.767361 1202718 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:50:10.797273 1202718 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:50:10.797442 1202718 ssh_runner.go:195] Run: crio --version
	I1123 08:50:10.844739 1202718 ssh_runner.go:195] Run: crio --version
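The two 60-second waits above amount to a retry loop on the socket followed by a version probe; a shell sketch of the same readiness check:

	# Poll for the CRI-O socket (60s budget, 1s interval), then confirm crictl can reach it
	for i in $(seq 1 60); do stat /var/run/crio/crio.sock >/dev/null 2>&1 && break; sleep 1; done
	sudo /usr/local/bin/crictl version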
	I1123 08:50:10.884358 1202718 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 08:50:07.667598 1187534 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:50:07.668080 1187534 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:50:07.668127 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:50:07.668181 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:50:07.698967 1187534 cri.go:89] found id: "39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:07.698990 1187534 cri.go:89] found id: ""
	I1123 08:50:07.698999 1187534 logs.go:282] 1 containers: [39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387]
	I1123 08:50:07.699055 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:07.702736 1187534 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1123 08:50:07.702827 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:50:07.730675 1187534 cri.go:89] found id: ""
	I1123 08:50:07.730698 1187534 logs.go:282] 0 containers: []
	W1123 08:50:07.730706 1187534 logs.go:284] No container was found matching "etcd"
	I1123 08:50:07.730714 1187534 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1123 08:50:07.730798 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:50:07.760933 1187534 cri.go:89] found id: ""
	I1123 08:50:07.760959 1187534 logs.go:282] 0 containers: []
	W1123 08:50:07.760967 1187534 logs.go:284] No container was found matching "coredns"
	I1123 08:50:07.760976 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:50:07.761038 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:50:07.787584 1187534 cri.go:89] found id: "ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:07.787608 1187534 cri.go:89] found id: ""
	I1123 08:50:07.787616 1187534 logs.go:282] 1 containers: [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37]
	I1123 08:50:07.787674 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:07.791296 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:50:07.791367 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:50:07.816321 1187534 cri.go:89] found id: ""
	I1123 08:50:07.816344 1187534 logs.go:282] 0 containers: []
	W1123 08:50:07.816352 1187534 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:50:07.816358 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:50:07.816417 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:50:07.846824 1187534 cri.go:89] found id: "82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:07.846848 1187534 cri.go:89] found id: ""
	I1123 08:50:07.846856 1187534 logs.go:282] 1 containers: [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417]
	I1123 08:50:07.846912 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:07.850535 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1123 08:50:07.850613 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:50:07.876504 1187534 cri.go:89] found id: ""
	I1123 08:50:07.876528 1187534 logs.go:282] 0 containers: []
	W1123 08:50:07.876537 1187534 logs.go:284] No container was found matching "kindnet"
	I1123 08:50:07.876543 1187534 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:50:07.876619 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:50:07.903205 1187534 cri.go:89] found id: ""
	I1123 08:50:07.903238 1187534 logs.go:282] 0 containers: []
	W1123 08:50:07.903247 1187534 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:50:07.903275 1187534 logs.go:123] Gathering logs for kube-controller-manager [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417] ...
	I1123 08:50:07.903299 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:07.927840 1187534 logs.go:123] Gathering logs for CRI-O ...
	I1123 08:50:07.927867 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1123 08:50:07.984041 1187534 logs.go:123] Gathering logs for container status ...
	I1123 08:50:07.984077 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:50:08.014822 1187534 logs.go:123] Gathering logs for kubelet ...
	I1123 08:50:08.014851 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:50:08.129119 1187534 logs.go:123] Gathering logs for dmesg ...
	I1123 08:50:08.129154 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:50:08.147725 1187534 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:50:08.147759 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:50:08.214394 1187534 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:50:08.214415 1187534 logs.go:123] Gathering logs for kube-apiserver [39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387] ...
	I1123 08:50:08.214436 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:08.250995 1187534 logs.go:123] Gathering logs for kube-scheduler [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37] ...
	I1123 08:50:08.251025 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:10.809133 1187534 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:50:10.809510 1187534 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:50:10.809559 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:50:10.809617 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:50:10.846334 1187534 cri.go:89] found id: "39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:10.846357 1187534 cri.go:89] found id: ""
	I1123 08:50:10.846365 1187534 logs.go:282] 1 containers: [39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387]
	I1123 08:50:10.846418 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:10.850664 1187534 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1123 08:50:10.850740 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:50:10.884139 1187534 cri.go:89] found id: ""
	I1123 08:50:10.884160 1187534 logs.go:282] 0 containers: []
	W1123 08:50:10.884168 1187534 logs.go:284] No container was found matching "etcd"
	I1123 08:50:10.884177 1187534 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1123 08:50:10.884236 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:50:10.930157 1187534 cri.go:89] found id: ""
	I1123 08:50:10.930181 1187534 logs.go:282] 0 containers: []
	W1123 08:50:10.930190 1187534 logs.go:284] No container was found matching "coredns"
	I1123 08:50:10.930197 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:50:10.930257 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:50:10.971456 1187534 cri.go:89] found id: "ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:10.971476 1187534 cri.go:89] found id: ""
	I1123 08:50:10.971483 1187534 logs.go:282] 1 containers: [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37]
	I1123 08:50:10.971541 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:10.975608 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:50:10.975681 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:50:11.023586 1187534 cri.go:89] found id: ""
	I1123 08:50:11.023607 1187534 logs.go:282] 0 containers: []
	W1123 08:50:11.023616 1187534 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:50:11.023623 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:50:11.023684 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:50:11.075063 1187534 cri.go:89] found id: "82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:11.075083 1187534 cri.go:89] found id: ""
	I1123 08:50:11.075091 1187534 logs.go:282] 1 containers: [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417]
	I1123 08:50:11.075146 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:11.079349 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1123 08:50:11.079419 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:50:11.123128 1187534 cri.go:89] found id: ""
	I1123 08:50:11.123149 1187534 logs.go:282] 0 containers: []
	W1123 08:50:11.123158 1187534 logs.go:284] No container was found matching "kindnet"
	I1123 08:50:11.123165 1187534 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:50:11.123266 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:50:11.164466 1187534 cri.go:89] found id: ""
	I1123 08:50:11.164488 1187534 logs.go:282] 0 containers: []
	W1123 08:50:11.164497 1187534 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:50:11.164505 1187534 logs.go:123] Gathering logs for CRI-O ...
	I1123 08:50:11.164528 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1123 08:50:11.240514 1187534 logs.go:123] Gathering logs for container status ...
	I1123 08:50:11.240595 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:50:11.285980 1187534 logs.go:123] Gathering logs for kubelet ...
	I1123 08:50:11.286055 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:50:11.412788 1187534 logs.go:123] Gathering logs for dmesg ...
	I1123 08:50:11.412843 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:50:11.430659 1187534 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:50:11.430792 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:50:11.519194 1187534 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:50:11.519272 1187534 logs.go:123] Gathering logs for kube-apiserver [39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387] ...
	I1123 08:50:11.519300 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:11.561272 1187534 logs.go:123] Gathering logs for kube-scheduler [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37] ...
	I1123 08:50:11.561572 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:11.631461 1187534 logs.go:123] Gathering logs for kube-controller-manager [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417] ...
	I1123 08:50:11.631513 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:10.887564 1202718 cli_runner.go:164] Run: docker network inspect pause-041000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:50:10.905887 1202718 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:50:10.910554 1202718 kubeadm.go:884] updating cluster {Name:pause-041000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-041000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:50:10.910702 1202718 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:50:10.910753 1202718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:50:10.952072 1202718 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:50:10.952091 1202718 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:50:10.952147 1202718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:50:10.984995 1202718 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:50:10.985014 1202718 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:50:10.985021 1202718 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1123 08:50:10.985130 1202718 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-041000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-041000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:50:10.985204 1202718 ssh_runner.go:195] Run: crio config
	I1123 08:50:11.060189 1202718 cni.go:84] Creating CNI manager for ""
	I1123 08:50:11.060260 1202718 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:50:11.060298 1202718 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:50:11.060353 1202718 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-041000 NodeName:pause-041000 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:50:11.060526 1202718 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-041000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
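
	Note on the block above: the rendered kubeadm config is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that this run writes to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. Below is a minimal Go sketch for sanity-checking such a stream; it assumes the gopkg.in/yaml.v3 package and the file path from this log, and is only an illustration, not the validation minikube itself performs.

package main

import (
	"bytes"
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Read the rendered config (path taken from the log above).
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The file is a multi-document stream; decode each document and print
	// its apiVersion/kind as a quick structural check.
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF ends the stream
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}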
	
	I1123 08:50:11.060634 1202718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:50:11.070674 1202718 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:50:11.070796 1202718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:50:11.083158 1202718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1123 08:50:11.101587 1202718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:50:11.119076 1202718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1123 08:50:11.137322 1202718 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:50:11.141917 1202718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:50:11.338814 1202718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:50:11.354216 1202718 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000 for IP: 192.168.85.2
	I1123 08:50:11.354237 1202718 certs.go:195] generating shared ca certs ...
	I1123 08:50:11.354254 1202718 certs.go:227] acquiring lock for ca certs: {Name:mk8b2dd1177c57b74f955f055073d275001ee616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:50:11.354380 1202718 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key
	I1123 08:50:11.354438 1202718 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key
	I1123 08:50:11.354455 1202718 certs.go:257] generating profile certs ...
	I1123 08:50:11.354544 1202718 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/client.key
	I1123 08:50:11.354612 1202718 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/apiserver.key.6d8251ec
	I1123 08:50:11.354654 1202718 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/proxy-client.key
	I1123 08:50:11.354767 1202718 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem (1338 bytes)
	W1123 08:50:11.354801 1202718 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159_empty.pem, impossibly tiny 0 bytes
	I1123 08:50:11.354814 1202718 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:50:11.354842 1202718 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:50:11.354875 1202718 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:50:11.354902 1202718 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem (1675 bytes)
	I1123 08:50:11.354949 1202718 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:50:11.355573 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:50:11.378587 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:50:11.400527 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:50:11.423761 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:50:11.446154 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 08:50:11.469247 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:50:11.489147 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:50:11.508933 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1123 08:50:11.547418 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /usr/share/ca-certificates/10431592.pem (1708 bytes)
	I1123 08:50:11.575101 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:50:11.609402 1202718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem --> /usr/share/ca-certificates/1043159.pem (1338 bytes)
	I1123 08:50:11.630628 1202718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:50:11.647973 1202718 ssh_runner.go:195] Run: openssl version
	I1123 08:50:11.654688 1202718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10431592.pem && ln -fs /usr/share/ca-certificates/10431592.pem /etc/ssl/certs/10431592.pem"
	I1123 08:50:11.663398 1202718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10431592.pem
	I1123 08:50:11.669734 1202718 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:03 /usr/share/ca-certificates/10431592.pem
	I1123 08:50:11.669798 1202718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10431592.pem
	I1123 08:50:11.712474 1202718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10431592.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:50:11.720366 1202718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:50:11.728600 1202718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:50:11.732183 1202718 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:50:11.732280 1202718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:50:11.773019 1202718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:50:11.780768 1202718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1043159.pem && ln -fs /usr/share/ca-certificates/1043159.pem /etc/ssl/certs/1043159.pem"
	I1123 08:50:11.788755 1202718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1043159.pem
	I1123 08:50:11.792354 1202718 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:03 /usr/share/ca-certificates/1043159.pem
	I1123 08:50:11.792418 1202718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1043159.pem
	I1123 08:50:11.833257 1202718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1043159.pem /etc/ssl/certs/51391683.0"
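
	The test -s / openssl x509 -hash / ln -fs sequence above is the standard OpenSSL CA-directory convention: each certificate copied under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and exposed in /etc/ssl/certs as <subject-hash>.0 so TLS clients on the node can locate it. A small Go sketch of the same two steps follows; it shells out to openssl (assumed to be on the guest's PATH) rather than re-implementing OpenSSL's subject hashing, and is illustrative only.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert reproduces the "<subject-hash>.0" symlink step seen in the log:
// `openssl x509 -hash -noout -in <cert>` followed by `ln -fs`.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// -fs semantics: replace any existing link before creating the new one.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}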
	I1123 08:50:11.841204 1202718 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:50:11.844824 1202718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 08:50:11.885657 1202718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 08:50:11.926621 1202718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 08:50:11.967474 1202718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 08:50:12.012380 1202718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 08:50:12.055124 1202718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
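
	Each `openssl x509 -checkend 86400` call above exits non-zero if the certificate will expire within 86400 seconds (24 hours); that is how the restart path decides whether the existing control-plane certificates can be reused. An equivalent check in Go, using only the standard library, is sketched below; the path is just one of the certificates named in this log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, matching what `openssl x509 -checkend 86400` asks (86400s = 24h).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}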
	I1123 08:50:12.096711 1202718 kubeadm.go:401] StartCluster: {Name:pause-041000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-041000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:50:12.096832 1202718 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:50:12.096898 1202718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:50:12.124886 1202718 cri.go:89] found id: "45dc8732cc188eafe78085325272d3d984a7f717166cd0187bd52e340fe5512f"
	I1123 08:50:12.124907 1202718 cri.go:89] found id: "9b1e6a2484dc967bcde959062922c706ed1adf1100c5f4604bf3898820f2b5ed"
	I1123 08:50:12.124912 1202718 cri.go:89] found id: "014caef83785d9e215263867e2ad026cec906105ca6cef8f64db232c473788dc"
	I1123 08:50:12.124915 1202718 cri.go:89] found id: "7868a019e831a2a996406b4d89aeecb6d8d06950ed992d7b8da380cb642f8763"
	I1123 08:50:12.124919 1202718 cri.go:89] found id: "0f807846d8d95db888a7aa3b3464682121926c4e8c8de2fb4a034403188441bc"
	I1123 08:50:12.124922 1202718 cri.go:89] found id: "08bc69e75e392dc8dcaa5e734aca0453b6fa3505a2d07872309a1e92aa887ca2"
	I1123 08:50:12.124926 1202718 cri.go:89] found id: "daa63f34218f6ccf5f1c984a5efedf997465def664bd74e9558bce3ec2095793"
	I1123 08:50:12.124963 1202718 cri.go:89] found id: ""
	I1123 08:50:12.125023 1202718 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 08:50:12.136286 1202718 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:50:12Z" level=error msg="open /run/runc: no such file or directory"
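
	Two probes happen at this point: `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` prints one container ID per line (the seven "found id" entries above), and `sudo runc list -f json` looks for paused containers. The runc failure ("open /run/runc: no such file or directory", i.e. runc has no state directory to report) is logged only as a warning and the flow continues to the existing-configuration check. A hedged Go sketch of the crictl step, assuming crictl and sudo are available on the node:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the log's
// `sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`;
// --quiet emits one container ID per line, which is what the "found id" lines show.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}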
	I1123 08:50:12.136364 1202718 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:50:12.144016 1202718 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 08:50:12.144037 1202718 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 08:50:12.144107 1202718 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 08:50:12.151160 1202718 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:50:12.151842 1202718 kubeconfig.go:125] found "pause-041000" server: "https://192.168.85.2:8443"
	I1123 08:50:12.152627 1202718 kapi.go:59] client config for pause-041000: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/client.crt", KeyFile:"/home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/client.key", CAFile:"/home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 08:50:12.153122 1202718 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1123 08:50:12.153142 1202718 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1123 08:50:12.153148 1202718 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1123 08:50:12.153155 1202718 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1123 08:50:12.153165 1202718 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1123 08:50:12.153428 1202718 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 08:50:12.160964 1202718 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 08:50:12.161061 1202718 kubeadm.go:602] duration metric: took 17.010572ms to restartPrimaryControlPlane
	I1123 08:50:12.161079 1202718 kubeadm.go:403] duration metric: took 64.377103ms to StartCluster
	I1123 08:50:12.161095 1202718 settings.go:142] acquiring lock: {Name:mk23f3092f33e47ced9558cb4bac2b30c55547fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:50:12.161167 1202718 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:50:12.162027 1202718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:50:12.162280 1202718 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:50:12.162715 1202718 config.go:182] Loaded profile config "pause-041000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:50:12.162768 1202718 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:50:12.166499 1202718 out.go:179] * Verifying Kubernetes components...
	I1123 08:50:12.166500 1202718 out.go:179] * Enabled addons: 
	I1123 08:50:12.169366 1202718 addons.go:530] duration metric: took 6.601147ms for enable addons: enabled=[]
	I1123 08:50:12.169453 1202718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:50:12.327734 1202718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:50:12.344308 1202718 node_ready.go:35] waiting up to 6m0s for node "pause-041000" to be "Ready" ...
	I1123 08:50:14.168647 1187534 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:50:14.169003 1187534 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1123 08:50:14.169051 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:50:14.169132 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:50:14.230306 1187534 cri.go:89] found id: "39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:14.230330 1187534 cri.go:89] found id: ""
	I1123 08:50:14.230338 1187534 logs.go:282] 1 containers: [39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387]
	I1123 08:50:14.230394 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:14.239126 1187534 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1123 08:50:14.239212 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:50:14.298328 1187534 cri.go:89] found id: ""
	I1123 08:50:14.298354 1187534 logs.go:282] 0 containers: []
	W1123 08:50:14.298364 1187534 logs.go:284] No container was found matching "etcd"
	I1123 08:50:14.298378 1187534 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1123 08:50:14.298610 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:50:14.351914 1187534 cri.go:89] found id: ""
	I1123 08:50:14.351948 1187534 logs.go:282] 0 containers: []
	W1123 08:50:14.351956 1187534 logs.go:284] No container was found matching "coredns"
	I1123 08:50:14.351964 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:50:14.352059 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:50:14.386478 1187534 cri.go:89] found id: "ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:14.386516 1187534 cri.go:89] found id: ""
	I1123 08:50:14.386524 1187534 logs.go:282] 1 containers: [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37]
	I1123 08:50:14.386591 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:14.390064 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:50:14.390142 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:50:14.437347 1187534 cri.go:89] found id: ""
	I1123 08:50:14.437374 1187534 logs.go:282] 0 containers: []
	W1123 08:50:14.437391 1187534 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:50:14.437397 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:50:14.437469 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:50:14.493834 1187534 cri.go:89] found id: "82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:14.493860 1187534 cri.go:89] found id: ""
	I1123 08:50:14.493868 1187534 logs.go:282] 1 containers: [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417]
	I1123 08:50:14.493934 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:14.503076 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1123 08:50:14.503166 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:50:14.555430 1187534 cri.go:89] found id: ""
	I1123 08:50:14.555470 1187534 logs.go:282] 0 containers: []
	W1123 08:50:14.555480 1187534 logs.go:284] No container was found matching "kindnet"
	I1123 08:50:14.555488 1187534 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:50:14.555559 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:50:14.609105 1187534 cri.go:89] found id: ""
	I1123 08:50:14.609132 1187534 logs.go:282] 0 containers: []
	W1123 08:50:14.609159 1187534 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:50:14.609168 1187534 logs.go:123] Gathering logs for dmesg ...
	I1123 08:50:14.609183 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:50:14.637863 1187534 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:50:14.637904 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 08:50:14.766527 1187534 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 08:50:14.766558 1187534 logs.go:123] Gathering logs for kube-apiserver [39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387] ...
	I1123 08:50:14.766573 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:14.821424 1187534 logs.go:123] Gathering logs for kube-scheduler [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37] ...
	I1123 08:50:14.821456 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:14.917613 1187534 logs.go:123] Gathering logs for kube-controller-manager [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417] ...
	I1123 08:50:14.917697 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:14.983379 1187534 logs.go:123] Gathering logs for CRI-O ...
	I1123 08:50:14.983404 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1123 08:50:15.064735 1187534 logs.go:123] Gathering logs for container status ...
	I1123 08:50:15.064814 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:50:15.131449 1187534 logs.go:123] Gathering logs for kubelet ...
	I1123 08:50:15.131529 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:50:17.776295 1187534 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:50:19.512201 1202718 node_ready.go:49] node "pause-041000" is "Ready"
	I1123 08:50:19.512232 1202718 node_ready.go:38] duration metric: took 7.167870988s for node "pause-041000" to be "Ready" ...
	I1123 08:50:19.512245 1202718 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:50:19.512302 1202718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:50:19.528916 1202718 api_server.go:72] duration metric: took 7.366598415s to wait for apiserver process to appear ...
	I1123 08:50:19.528941 1202718 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:50:19.528959 1202718 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 08:50:19.656689 1202718 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 08:50:19.656787 1202718 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 08:50:20.029057 1202718 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 08:50:20.039141 1202718 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 08:50:20.039353 1202718 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 08:50:20.529660 1202718 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 08:50:20.537742 1202718 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 08:50:20.538843 1202718 api_server.go:141] control plane version: v1.34.1
	I1123 08:50:20.538868 1202718 api_server.go:131] duration metric: took 1.009920448s to wait for apiserver health ...
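
	The readiness loop above is plain HTTPS polling of the apiserver's /healthz endpoint: a 500 response listing "[-]poststarthook/..." entries means the server is up but still running its bootstrap hooks, and the loop retries until it gets 200 "ok" (about one second in this run). A self-contained Go sketch of the same idea follows; it skips TLS verification purely to stay short, whereas a production check would pin the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns 200 or the timeout elapses,
// printing the body of non-200 responses (the poststarthook details above).
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}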
	I1123 08:50:20.538877 1202718 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:50:20.542229 1202718 system_pods.go:59] 7 kube-system pods found
	I1123 08:50:20.542267 1202718 system_pods.go:61] "coredns-66bc5c9577-p8fzx" [449bd814-b2c4-445c-8341-2d6fd4035f0e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:50:20.542279 1202718 system_pods.go:61] "etcd-pause-041000" [686e9429-7f08-4647-b76a-ea1509e228e8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:50:20.542285 1202718 system_pods.go:61] "kindnet-pzr9x" [26b37d37-77bd-4372-9d14-476cd4f1e851] Running
	I1123 08:50:20.542291 1202718 system_pods.go:61] "kube-apiserver-pause-041000" [22756ad9-71b8-43a4-ae5d-d736b1925a32] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:50:20.542303 1202718 system_pods.go:61] "kube-controller-manager-pause-041000" [a677a284-6f9d-4054-99b8-ce3ec472d3d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:50:20.542310 1202718 system_pods.go:61] "kube-proxy-jzpjt" [d43fb6ce-e107-46b2-9d52-19736141dc91] Running
	I1123 08:50:20.542319 1202718 system_pods.go:61] "kube-scheduler-pause-041000" [e3f5cb4f-2087-4f60-bcfa-8dc8a2f6a21c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:50:20.542329 1202718 system_pods.go:74] duration metric: took 3.445841ms to wait for pod list to return data ...
	I1123 08:50:20.542339 1202718 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:50:20.544825 1202718 default_sa.go:45] found service account: "default"
	I1123 08:50:20.544851 1202718 default_sa.go:55] duration metric: took 2.503343ms for default service account to be created ...
	I1123 08:50:20.544861 1202718 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:50:20.547535 1202718 system_pods.go:86] 7 kube-system pods found
	I1123 08:50:20.547567 1202718 system_pods.go:89] "coredns-66bc5c9577-p8fzx" [449bd814-b2c4-445c-8341-2d6fd4035f0e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:50:20.547577 1202718 system_pods.go:89] "etcd-pause-041000" [686e9429-7f08-4647-b76a-ea1509e228e8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:50:20.547584 1202718 system_pods.go:89] "kindnet-pzr9x" [26b37d37-77bd-4372-9d14-476cd4f1e851] Running
	I1123 08:50:20.547590 1202718 system_pods.go:89] "kube-apiserver-pause-041000" [22756ad9-71b8-43a4-ae5d-d736b1925a32] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:50:20.547598 1202718 system_pods.go:89] "kube-controller-manager-pause-041000" [a677a284-6f9d-4054-99b8-ce3ec472d3d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:50:20.547606 1202718 system_pods.go:89] "kube-proxy-jzpjt" [d43fb6ce-e107-46b2-9d52-19736141dc91] Running
	I1123 08:50:20.547616 1202718 system_pods.go:89] "kube-scheduler-pause-041000" [e3f5cb4f-2087-4f60-bcfa-8dc8a2f6a21c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:50:20.547623 1202718 system_pods.go:126] duration metric: took 2.756129ms to wait for k8s-apps to be running ...
	I1123 08:50:20.547635 1202718 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:50:20.547692 1202718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:50:20.561197 1202718 system_svc.go:56] duration metric: took 13.553464ms WaitForService to wait for kubelet
	I1123 08:50:20.561224 1202718 kubeadm.go:587] duration metric: took 8.398912422s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:50:20.561240 1202718 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:50:20.564454 1202718 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:50:20.564485 1202718 node_conditions.go:123] node cpu capacity is 2
	I1123 08:50:20.564499 1202718 node_conditions.go:105] duration metric: took 3.252821ms to run NodePressure ...
	I1123 08:50:20.564511 1202718 start.go:242] waiting for startup goroutines ...
	I1123 08:50:20.564518 1202718 start.go:247] waiting for cluster config update ...
	I1123 08:50:20.564532 1202718 start.go:256] writing updated cluster config ...
	I1123 08:50:20.564846 1202718 ssh_runner.go:195] Run: rm -f paused
	I1123 08:50:20.568495 1202718 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:50:20.569112 1202718 kapi.go:59] client config for pause-041000: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/client.crt", KeyFile:"/home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/pause-041000/client.key", CAFile:"/home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 08:50:20.572578 1202718 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p8fzx" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:50:22.578660 1202718 pod_ready.go:104] pod "coredns-66bc5c9577-p8fzx" is not "Ready", error: <nil>
	I1123 08:50:22.776905 1187534 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1123 08:50:22.776972 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1123 08:50:22.777039 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 08:50:22.814040 1187534 cri.go:89] found id: "5abe9dd9f1f9662dd8f041afce9a5d1e1922dff28712b78a4a42382a6249645b"
	I1123 08:50:22.814058 1187534 cri.go:89] found id: "39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:22.814063 1187534 cri.go:89] found id: ""
	I1123 08:50:22.814070 1187534 logs.go:282] 2 containers: [5abe9dd9f1f9662dd8f041afce9a5d1e1922dff28712b78a4a42382a6249645b 39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387]
	I1123 08:50:22.814135 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:22.818324 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:22.822093 1187534 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1123 08:50:22.822166 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 08:50:22.853928 1187534 cri.go:89] found id: ""
	I1123 08:50:22.853954 1187534 logs.go:282] 0 containers: []
	W1123 08:50:22.853963 1187534 logs.go:284] No container was found matching "etcd"
	I1123 08:50:22.853970 1187534 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1123 08:50:22.854031 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 08:50:22.896180 1187534 cri.go:89] found id: ""
	I1123 08:50:22.896208 1187534 logs.go:282] 0 containers: []
	W1123 08:50:22.896217 1187534 logs.go:284] No container was found matching "coredns"
	I1123 08:50:22.896223 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1123 08:50:22.896281 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 08:50:22.940549 1187534 cri.go:89] found id: "ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:22.940575 1187534 cri.go:89] found id: ""
	I1123 08:50:22.940583 1187534 logs.go:282] 1 containers: [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37]
	I1123 08:50:22.940651 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:22.944486 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1123 08:50:22.944555 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 08:50:22.974972 1187534 cri.go:89] found id: ""
	I1123 08:50:22.974999 1187534 logs.go:282] 0 containers: []
	W1123 08:50:22.975008 1187534 logs.go:284] No container was found matching "kube-proxy"
	I1123 08:50:22.975015 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 08:50:22.975074 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 08:50:23.006880 1187534 cri.go:89] found id: "82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:23.006906 1187534 cri.go:89] found id: ""
	I1123 08:50:23.006914 1187534 logs.go:282] 1 containers: [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417]
	I1123 08:50:23.006976 1187534 ssh_runner.go:195] Run: which crictl
	I1123 08:50:23.010823 1187534 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1123 08:50:23.010898 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 08:50:23.038139 1187534 cri.go:89] found id: ""
	I1123 08:50:23.038161 1187534 logs.go:282] 0 containers: []
	W1123 08:50:23.038170 1187534 logs.go:284] No container was found matching "kindnet"
	I1123 08:50:23.038176 1187534 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1123 08:50:23.038235 1187534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 08:50:23.083217 1187534 cri.go:89] found id: ""
	I1123 08:50:23.083238 1187534 logs.go:282] 0 containers: []
	W1123 08:50:23.083246 1187534 logs.go:284] No container was found matching "storage-provisioner"
	I1123 08:50:23.083260 1187534 logs.go:123] Gathering logs for kube-apiserver [39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387] ...
	I1123 08:50:23.083281 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 39424203732ace3ffd26d3a1102986e5ff18f2a8d1d19c59723439c373d6e387"
	I1123 08:50:23.127751 1187534 logs.go:123] Gathering logs for kube-scheduler [ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37] ...
	I1123 08:50:23.127789 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ef352add72d1b5d122a1acdd8ce9a349090ac17f15deacdcfa4d0521a90eda37"
	I1123 08:50:23.195977 1187534 logs.go:123] Gathering logs for kube-controller-manager [82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417] ...
	I1123 08:50:23.196013 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 82b9e0e4e8b59ab73bc8af03372c379796a11ac9b29f13d3908960cdcb0be417"
	I1123 08:50:23.229217 1187534 logs.go:123] Gathering logs for CRI-O ...
	I1123 08:50:23.229241 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1123 08:50:23.298646 1187534 logs.go:123] Gathering logs for container status ...
	I1123 08:50:23.298734 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 08:50:23.352858 1187534 logs.go:123] Gathering logs for kubelet ...
	I1123 08:50:23.352934 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 08:50:23.470434 1187534 logs.go:123] Gathering logs for dmesg ...
	I1123 08:50:23.470470 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 08:50:23.488779 1187534 logs.go:123] Gathering logs for describe nodes ...
	I1123 08:50:23.488808 1187534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1123 08:50:24.578410 1202718 pod_ready.go:94] pod "coredns-66bc5c9577-p8fzx" is "Ready"
	I1123 08:50:24.578440 1202718 pod_ready.go:86] duration metric: took 4.005835132s for pod "coredns-66bc5c9577-p8fzx" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:24.581302 1202718 pod_ready.go:83] waiting for pod "etcd-pause-041000" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:50:26.586439 1202718 pod_ready.go:104] pod "etcd-pause-041000" is not "Ready", error: <nil>
	I1123 08:50:28.087562 1202718 pod_ready.go:94] pod "etcd-pause-041000" is "Ready"
	I1123 08:50:28.087590 1202718 pod_ready.go:86] duration metric: took 3.506265831s for pod "etcd-pause-041000" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:28.090226 1202718 pod_ready.go:83] waiting for pod "kube-apiserver-pause-041000" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:28.095484 1202718 pod_ready.go:94] pod "kube-apiserver-pause-041000" is "Ready"
	I1123 08:50:28.095514 1202718 pod_ready.go:86] duration metric: took 5.262375ms for pod "kube-apiserver-pause-041000" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:28.098064 1202718 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-041000" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:28.102787 1202718 pod_ready.go:94] pod "kube-controller-manager-pause-041000" is "Ready"
	I1123 08:50:28.102816 1202718 pod_ready.go:86] duration metric: took 4.728835ms for pod "kube-controller-manager-pause-041000" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:28.105346 1202718 pod_ready.go:83] waiting for pod "kube-proxy-jzpjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:28.286092 1202718 pod_ready.go:94] pod "kube-proxy-jzpjt" is "Ready"
	I1123 08:50:28.286122 1202718 pod_ready.go:86] duration metric: took 180.750695ms for pod "kube-proxy-jzpjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:28.486102 1202718 pod_ready.go:83] waiting for pod "kube-scheduler-pause-041000" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:29.286171 1202718 pod_ready.go:94] pod "kube-scheduler-pause-041000" is "Ready"
	I1123 08:50:29.286200 1202718 pod_ready.go:86] duration metric: took 800.068363ms for pod "kube-scheduler-pause-041000" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:50:29.286213 1202718 pod_ready.go:40] duration metric: took 8.717685176s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:50:29.336965 1202718 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 08:50:29.339907 1202718 out.go:179] * Done! kubectl is now configured to use "pause-041000" cluster and "default" namespace by default
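
The trace above shows minikube waiting, label by label, for the kube-system control-plane pods to report Ready before declaring the pause-041000 profile done. A rough manual equivalent of that readiness check (a sketch only; it assumes minikube merged a context named after the profile, as its final "Done!" message suggests) is:

	kubectl --context pause-041000 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=120s
	kubectl --context pause-041000 -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=120s

The same pattern applies to the other component labels listed in the wait summary (etcd, kube-controller-manager, kube-proxy, kube-scheduler).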
	
	
	==> CRI-O <==
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.490002711Z" level=info msg="Created container 5706a7e3c437e929a3040132d53857e200f691e33708663ad34cf3739ffa9fa5: kube-system/etcd-pause-041000/etcd" id=23a3d8bd-7d03-4fdb-bb07-b536042af6df name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.490921506Z" level=info msg="Starting container: 8930f41e87c1a562cfa6ee0b54b62abcdfa7cb763db50ee729cccd3950c35f8d" id=2211fa7c-c178-44a9-aa33-5c00b6f456a3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.49939631Z" level=info msg="Starting container: 5706a7e3c437e929a3040132d53857e200f691e33708663ad34cf3739ffa9fa5" id=7755e48a-872b-4075-83ee-bb3b711df221 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.501095162Z" level=info msg="Starting container: 3ad7b30765627018f51db658fa83ba644bdf45e1aba371d0fb38f45a91374bcd" id=1695823e-594c-4813-bc91-9a0716296136 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.502846065Z" level=info msg="Started container" PID=2388 containerID=8930f41e87c1a562cfa6ee0b54b62abcdfa7cb763db50ee729cccd3950c35f8d description=kube-system/kindnet-pzr9x/kindnet-cni id=2211fa7c-c178-44a9-aa33-5c00b6f456a3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5e2c95d12a6b3610a922f16a275c4d0a24af5fa8872275fdda90330aa3b49bfd
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.504364581Z" level=info msg="Started container" PID=2385 containerID=5706a7e3c437e929a3040132d53857e200f691e33708663ad34cf3739ffa9fa5 description=kube-system/etcd-pause-041000/etcd id=7755e48a-872b-4075-83ee-bb3b711df221 name=/runtime.v1.RuntimeService/StartContainer sandboxID=888cec96d3d406ea3499df9b84f2f6829ba443323ea1b3f27f5919cb99374ece
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.507896451Z" level=info msg="Started container" PID=2372 containerID=3ad7b30765627018f51db658fa83ba644bdf45e1aba371d0fb38f45a91374bcd description=kube-system/kube-controller-manager-pause-041000/kube-controller-manager id=1695823e-594c-4813-bc91-9a0716296136 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b902b4f9183bd972221bff77da33f1b1501315fc678e1c4355f39446fe4769f
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.570618426Z" level=info msg="Created container 72f2b8001c28bfca63c62972110a9ad36c4820ca3ef16e526e0a844ca887492c: kube-system/kube-proxy-jzpjt/kube-proxy" id=e42c951c-a23d-444d-b080-63383b9a0a7f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.57136344Z" level=info msg="Starting container: 72f2b8001c28bfca63c62972110a9ad36c4820ca3ef16e526e0a844ca887492c" id=c11efcb3-7e32-4b11-9b71-26e4f576c3d1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:50:12 pause-041000 crio[2079]: time="2025-11-23T08:50:12.573643709Z" level=info msg="Started container" PID=2395 containerID=72f2b8001c28bfca63c62972110a9ad36c4820ca3ef16e526e0a844ca887492c description=kube-system/kube-proxy-jzpjt/kube-proxy id=c11efcb3-7e32-4b11-9b71-26e4f576c3d1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=efab7da3b2afb9e5e93bd4746c5403e5084a57446bd1702bd47a19e3729c83bf
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.844972685Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.864163933Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.864200814Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.864227841Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.872143505Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.872325432Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.872406792Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.877102931Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.877335811Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.877420707Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.884778232Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.884934536Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.885017389Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.889416411Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:50:22 pause-041000 crio[2079]: time="2025-11-23T08:50:22.889729822Z" level=info msg="Updated default CNI network name to kindnet"
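
These CRI-O entries come from the journalctl call recorded earlier in the trace (sudo journalctl -u crio -n 400). To follow the same unit from the host while a test runs, one option (a sketch, assuming minikube ssh reaches the node for this profile) is:

	minikube -p pause-041000 ssh -- sudo journalctl -u crio --no-pager -n 100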
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	72f2b8001c28b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   22 seconds ago       Running             kube-proxy                1                   efab7da3b2afb       kube-proxy-jzpjt                       kube-system
	8930f41e87c1a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   22 seconds ago       Running             kindnet-cni               1                   5e2c95d12a6b3       kindnet-pzr9x                          kube-system
	5706a7e3c437e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   22 seconds ago       Running             etcd                      1                   888cec96d3d40       etcd-pause-041000                      kube-system
	3ad7b30765627       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   22 seconds ago       Running             kube-controller-manager   1                   0b902b4f9183b       kube-controller-manager-pause-041000   kube-system
	0f27e4bf55b60       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   22 seconds ago       Running             kube-scheduler            1                   14991bae5f05b       kube-scheduler-pause-041000            kube-system
	47c71e2fb9574       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   22 seconds ago       Running             kube-apiserver            1                   f0399c83b9c60       kube-apiserver-pause-041000            kube-system
	7f4c49daf75ac       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   c05ed95a18a9e       coredns-66bc5c9577-p8fzx               kube-system
	45dc8732cc188       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   35 seconds ago       Exited              coredns                   0                   c05ed95a18a9e       coredns-66bc5c9577-p8fzx               kube-system
	9b1e6a2484dc9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   5e2c95d12a6b3       kindnet-pzr9x                          kube-system
	014caef83785d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   efab7da3b2afb       kube-proxy-jzpjt                       kube-system
	7868a019e831a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   f0399c83b9c60       kube-apiserver-pause-041000            kube-system
	0f807846d8d95       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   14991bae5f05b       kube-scheduler-pause-041000            kube-system
	08bc69e75e392       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   0b902b4f9183b       kube-controller-manager-pause-041000   kube-system
	daa63f34218f6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   888cec96d3d40       etcd-pause-041000                      kube-system
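
The container listing above is the output of the crictl call logged in the trace (sudo crictl ps -a). The same table can be regenerated from the host (a sketch, again assuming minikube ssh works for the pause-041000 profile):

	minikube -p pause-041000 ssh -- sudo crictl ps -a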
	
	
	==> coredns [45dc8732cc188eafe78085325272d3d984a7f717166cd0187bd52e340fe5512f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41535 - 50635 "HINFO IN 7569954815826303863.104617039468642547. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.024224512s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7f4c49daf75ac06cb4aa4e7ca85ebdd1cd16f76be10a2e5f73b954fc7c75b042] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40822 - 5964 "HINFO IN 4564197123057285760.4762303896874097968. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01681234s
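
This second CoreDNS instance initially cannot reach the API server at 10.96.0.1:443 (connection refused while kube-apiserver restarts), starts with an unsynced Kubernetes API, and then recovers. To re-pull these pod logs directly rather than through the report (a sketch, using the same k8s-app=kube-dns label the wait loop relies on):

	kubectl --context pause-041000 -n kube-system logs -l k8s-app=kube-dns --tail=50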
	
	
	==> describe nodes <==
	Name:               pause-041000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-041000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=pause-041000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_49_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:49:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-041000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:50:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:49:59 +0000   Sun, 23 Nov 2025 08:49:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:49:59 +0000   Sun, 23 Nov 2025 08:49:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:49:59 +0000   Sun, 23 Nov 2025 08:49:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:49:59 +0000   Sun, 23 Nov 2025 08:49:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-041000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                0f3e7557-0353-4e9b-a0f6-47a4a416f8d8
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-p8fzx                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     76s
	  kube-system                 etcd-pause-041000                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         82s
	  kube-system                 kindnet-pzr9x                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-pause-041000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-pause-041000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-jzpjt                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-041000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 75s                kube-proxy       
	  Normal   Starting                 14s                kube-proxy       
	  Normal   NodeHasSufficientPID     89s (x7 over 89s)  kubelet          Node pause-041000 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 89s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  89s (x7 over 89s)  kubelet          Node pause-041000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    89s (x7 over 89s)  kubelet          Node pause-041000 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 89s                kubelet          Starting kubelet.
	  Normal   Starting                 82s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 82s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  81s                kubelet          Node pause-041000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    81s                kubelet          Node pause-041000 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     81s                kubelet          Node pause-041000 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s                node-controller  Node pause-041000 event: Registered Node pause-041000 in Controller
	  Normal   NodeReady                35s                kubelet          Node pause-041000 status is now: NodeReady
	  Normal   RegisteredNode           12s                node-controller  Node pause-041000 event: Registered Node pause-041000 in Controller
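
The node description above was produced with the in-guest kubectl binary (see the ssh_runner line running "describe nodes" with --kubeconfig=/var/lib/minikube/kubeconfig). From the host, the equivalent (a sketch, assuming the merged pause-041000 context) is simply:

	kubectl --context pause-041000 describe node pause-041000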
	
	
	==> dmesg <==
	[Nov23 08:23] overlayfs: idmapped layers are currently not supported
	[ +45.736894] overlayfs: idmapped layers are currently not supported
	[Nov23 08:25] overlayfs: idmapped layers are currently not supported
	[  +2.559069] overlayfs: idmapped layers are currently not supported
	[Nov23 08:26] overlayfs: idmapped layers are currently not supported
	[ +51.342642] overlayfs: idmapped layers are currently not supported
	[Nov23 08:28] overlayfs: idmapped layers are currently not supported
	[Nov23 08:32] overlayfs: idmapped layers are currently not supported
	[Nov23 08:33] overlayfs: idmapped layers are currently not supported
	[Nov23 08:34] overlayfs: idmapped layers are currently not supported
	[Nov23 08:35] overlayfs: idmapped layers are currently not supported
	[Nov23 08:36] overlayfs: idmapped layers are currently not supported
	[Nov23 08:37] overlayfs: idmapped layers are currently not supported
	[Nov23 08:38] overlayfs: idmapped layers are currently not supported
	[  +8.276067] overlayfs: idmapped layers are currently not supported
	[Nov23 08:39] overlayfs: idmapped layers are currently not supported
	[ +25.090966] overlayfs: idmapped layers are currently not supported
	[Nov23 08:40] overlayfs: idmapped layers are currently not supported
	[ +26.896711] overlayfs: idmapped layers are currently not supported
	[Nov23 08:41] overlayfs: idmapped layers are currently not supported
	[Nov23 08:43] overlayfs: idmapped layers are currently not supported
	[Nov23 08:45] overlayfs: idmapped layers are currently not supported
	[Nov23 08:46] overlayfs: idmapped layers are currently not supported
	[Nov23 08:47] overlayfs: idmapped layers are currently not supported
	[Nov23 08:49] overlayfs: idmapped layers are currently not supported
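
The dmesg excerpt was filtered to warning level and above (dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400, per the trace); the recurring "overlayfs: idmapped layers are currently not supported" lines appear across the whole host uptime, not just this test, and are typically benign kernel noise from container starts. To reproduce the filtered view on the node (a sketch):

	minikube -p pause-041000 ssh -- sudo dmesg --level warn,err,crit,alert,emerg | tail -n 50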
	
	
	==> etcd [5706a7e3c437e929a3040132d53857e200f691e33708663ad34cf3739ffa9fa5] <==
	{"level":"warn","ts":"2025-11-23T08:50:17.900275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:17.927903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:17.953245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:17.966172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:17.993735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.022499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.093116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.139425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.153535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.183633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.198590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.228490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.260058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.292880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.309269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.337409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.363981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.403066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.431680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.462288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.522644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.575614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.584838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.604009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:50:18.736083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40458","server-name":"","error":"EOF"}
	
	
	==> etcd [daa63f34218f6ccf5f1c984a5efedf997465def664bd74e9558bce3ec2095793] <==
	{"level":"warn","ts":"2025-11-23T08:49:09.194248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:49:09.224162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:49:09.292196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:49:09.297812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:49:09.341476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:49:09.357795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:49:09.533173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35138","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T08:50:03.840501Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-23T08:50:03.840557Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-041000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-23T08:50:03.840650Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T08:50:03.977861Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T08:50:03.977940Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T08:50:03.977962Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-23T08:50:03.978032Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-23T08:50:03.978102Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T08:50:03.978128Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T08:50:03.978137Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T08:50:03.978138Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-23T08:50:03.978176Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T08:50:03.978184Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T08:50:03.978191Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T08:50:03.981406Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-23T08:50:03.981485Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T08:50:03.981520Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T08:50:03.981528Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-041000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 08:50:34 up  9:32,  0 user,  load average: 3.41, 2.67, 2.22
	Linux pause-041000 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8930f41e87c1a562cfa6ee0b54b62abcdfa7cb763db50ee729cccd3950c35f8d] <==
	I1123 08:50:12.629566       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:50:12.630493       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:50:12.630678       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:50:12.630728       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:50:12.630767       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:50:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:50:12.844735       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:50:12.844825       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:50:12.844879       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:50:12.845929       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:50:19.645108       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:50:19.645234       1 metrics.go:72] Registering metrics
	I1123 08:50:19.645334       1 controller.go:711] "Syncing nftables rules"
	I1123 08:50:22.844564       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:50:22.844628       1 main.go:301] handling current node
	I1123 08:50:32.847141       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:50:32.847177       1 main.go:301] handling current node
	
	
	==> kindnet [9b1e6a2484dc967bcde959062922c706ed1adf1100c5f4604bf3898820f2b5ed] <==
	I1123 08:49:18.638872       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:49:18.639314       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:49:18.639992       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:49:18.640061       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:49:18.640102       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:49:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:49:18.845346       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:49:18.848194       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:49:18.848290       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:49:18.848505       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:49:48.849974       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 08:49:48.850099       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 08:49:48.850215       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 08:49:48.928660       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1123 08:49:50.048588       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:49:50.048637       1 metrics.go:72] Registering metrics
	I1123 08:49:50.048732       1 controller.go:711] "Syncing nftables rules"
	I1123 08:49:58.851829       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:49:58.851883       1 main.go:301] handling current node
	
	
	==> kube-apiserver [47c71e2fb9574a058e5a5920e44ae120194a58b368bf86420a497c977179f436] <==
	I1123 08:50:19.602136       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 08:50:19.602437       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 08:50:19.602453       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 08:50:19.602621       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 08:50:19.607244       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 08:50:19.614209       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 08:50:19.614347       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:50:19.618104       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 08:50:19.618171       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 08:50:19.618300       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 08:50:19.618329       1 policy_source.go:240] refreshing policies
	I1123 08:50:19.618413       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 08:50:19.618505       1 aggregator.go:171] initial CRD sync complete...
	I1123 08:50:19.618518       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 08:50:19.618523       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:50:19.618528       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:50:19.619737       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 08:50:19.671500       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1123 08:50:19.723470       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 08:50:20.271819       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:50:21.464145       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:50:22.926002       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:50:23.164056       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:50:23.214895       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:50:23.276029       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [7868a019e831a2a996406b4d89aeecb6d8d06950ed992d7b8da380cb642f8763] <==
	W1123 08:50:03.862273       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862360       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862412       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862458       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862507       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862554       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862605       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862650       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862708       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862755       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862908       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.862966       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863028       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863090       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863134       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863321       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863519       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863570       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863617       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863663       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863708       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863753       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.863986       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.864067       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 08:50:03.864125       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [08bc69e75e392dc8dcaa5e734aca0453b6fa3505a2d07872309a1e92aa887ca2] <==
	I1123 08:49:17.413431       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:49:17.406998       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:49:17.408345       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:49:17.409611       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:49:17.406423       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:49:17.406820       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:49:17.406783       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-041000" podCIDRs=["10.244.0.0/24"]
	I1123 08:49:17.414580       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:49:17.414702       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:49:17.414718       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:49:17.414768       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:49:17.429173       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 08:49:17.429318       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:49:17.433226       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:49:17.453889       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:49:17.456709       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:49:17.459326       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:49:17.459406       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:49:17.459291       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:49:17.459267       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:49:17.462151       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:49:17.462266       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-041000"
	I1123 08:49:17.462331       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 08:49:17.479091       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:50:02.468790       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [3ad7b30765627018f51db658fa83ba644bdf45e1aba371d0fb38f45a91374bcd] <==
	I1123 08:50:22.890084       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:50:22.891604       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 08:50:22.893529       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 08:50:22.895891       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:50:22.898835       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:50:22.901581       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:50:22.904787       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:50:22.914304       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:50:22.914914       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:50:22.915029       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:50:22.915083       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:50:22.915111       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:50:22.915336       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:50:22.919117       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:50:22.920344       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:50:22.928209       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:50:22.930524       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:50:22.936207       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:50:22.949371       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:50:22.957067       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:50:22.957234       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:50:22.957333       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-041000"
	I1123 08:50:22.957406       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 08:50:22.970650       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:50:22.972831       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [014caef83785d9e215263867e2ad026cec906105ca6cef8f64db232c473788dc] <==
	I1123 08:49:18.686133       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:49:18.764976       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:49:18.868342       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:49:18.877834       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 08:49:18.877942       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:49:18.972431       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:49:18.972548       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:49:18.984288       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:49:18.984622       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:49:18.984763       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:49:18.986977       1 config.go:200] "Starting service config controller"
	I1123 08:49:18.987036       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:49:18.987092       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:49:18.987121       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:49:18.987165       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:49:18.987283       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:49:18.988011       1 config.go:309] "Starting node config controller"
	I1123 08:49:18.988062       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:49:18.988090       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:49:19.088632       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:49:19.088688       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:49:19.088424       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [72f2b8001c28bfca63c62972110a9ad36c4820ca3ef16e526e0a844ca887492c] <==
	I1123 08:50:16.191359       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:50:18.866067       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:50:19.776493       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:50:19.776608       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 08:50:19.776733       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:50:19.831650       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:50:19.831752       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:50:19.840686       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:50:19.841068       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:50:19.841086       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:50:19.846790       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:50:19.846875       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:50:19.847385       1 config.go:200] "Starting service config controller"
	I1123 08:50:19.847493       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:50:19.847819       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:50:19.847826       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:50:19.848170       1 config.go:309] "Starting node config controller"
	I1123 08:50:19.848180       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:50:19.848186       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:50:19.947447       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:50:19.948611       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:50:19.948700       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [0f27e4bf55b6041def1c3334330ebd093381737fe017224f64088735e7725ee3] <==
	I1123 08:50:16.815501       1 serving.go:386] Generated self-signed cert in-memory
	W1123 08:50:19.447572       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 08:50:19.447686       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 08:50:19.447721       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 08:50:19.447751       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 08:50:19.560763       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:50:19.560857       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:50:19.563100       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:50:19.571301       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:50:19.572335       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:50:19.572491       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:50:19.675424       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [0f807846d8d95db888a7aa3b3464682121926c4e8c8de2fb4a034403188441bc] <==
	E1123 08:49:11.427665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:49:11.429882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:49:11.430038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:49:11.430229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:49:11.430333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:49:11.430406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:49:11.430485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:49:11.430595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:49:11.430719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:49:11.430800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:49:11.431120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:49:11.431342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:49:11.431434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:49:11.431510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:49:11.431568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:49:11.431613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:49:11.431662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:49:11.431849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1123 08:49:13.014794       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:50:03.843839       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1123 08:50:03.843859       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1123 08:50:03.843879       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1123 08:50:03.843906       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:50:03.844125       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1123 08:50:03.844140       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 23 08:50:12 pause-041000 kubelet[1307]: I1123 08:50:12.362318    1307 scope.go:117] "RemoveContainer" containerID="9b1e6a2484dc967bcde959062922c706ed1adf1100c5f4604bf3898820f2b5ed"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.362948    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-p8fzx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="449bd814-b2c4-445c-8341-2d6fd4035f0e" pod="kube-system/coredns-66bc5c9577-p8fzx"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.363129    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-041000\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="dc7adce6a209181bcf58f5abfc21fa44" pod="kube-system/kube-controller-manager-pause-041000"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.363504    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-041000\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="7cc0754c87d623873a73f56f4f590f1e" pod="kube-system/kube-scheduler-pause-041000"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.363672    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-041000\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a07005e87541b45ab343768ee149a754" pod="kube-system/etcd-pause-041000"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.363833    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-041000\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="05758d343c56073bd323082828a7f2e9" pod="kube-system/kube-apiserver-pause-041000"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.363993    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-pzr9x\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="26b37d37-77bd-4372-9d14-476cd4f1e851" pod="kube-system/kindnet-pzr9x"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: I1123 08:50:12.405693    1307 scope.go:117] "RemoveContainer" containerID="014caef83785d9e215263867e2ad026cec906105ca6cef8f64db232c473788dc"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.406125    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-041000\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="7cc0754c87d623873a73f56f4f590f1e" pod="kube-system/kube-scheduler-pause-041000"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.406638    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-041000\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a07005e87541b45ab343768ee149a754" pod="kube-system/etcd-pause-041000"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.406906    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-041000\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="05758d343c56073bd323082828a7f2e9" pod="kube-system/kube-apiserver-pause-041000"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.407701    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jzpjt\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d43fb6ce-e107-46b2-9d52-19736141dc91" pod="kube-system/kube-proxy-jzpjt"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.408071    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-pzr9x\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="26b37d37-77bd-4372-9d14-476cd4f1e851" pod="kube-system/kindnet-pzr9x"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.408464    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-p8fzx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="449bd814-b2c4-445c-8341-2d6fd4035f0e" pod="kube-system/coredns-66bc5c9577-p8fzx"
	Nov 23 08:50:12 pause-041000 kubelet[1307]: E1123 08:50:12.408774    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-041000\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="dc7adce6a209181bcf58f5abfc21fa44" pod="kube-system/kube-controller-manager-pause-041000"
	Nov 23 08:50:19 pause-041000 kubelet[1307]: E1123 08:50:19.312601    1307 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-041000\" is forbidden: User \"system:node:pause-041000\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-041000' and this object" podUID="7cc0754c87d623873a73f56f4f590f1e" pod="kube-system/kube-scheduler-pause-041000"
	Nov 23 08:50:19 pause-041000 kubelet[1307]: E1123 08:50:19.312845    1307 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-041000\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-041000' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 23 08:50:19 pause-041000 kubelet[1307]: E1123 08:50:19.312956    1307 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-041000\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-041000' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 23 08:50:19 pause-041000 kubelet[1307]: E1123 08:50:19.313427    1307 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-041000\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-041000' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 23 08:50:19 pause-041000 kubelet[1307]: E1123 08:50:19.432310    1307 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-041000\" is forbidden: User \"system:node:pause-041000\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-041000' and this object" podUID="a07005e87541b45ab343768ee149a754" pod="kube-system/etcd-pause-041000"
	Nov 23 08:50:19 pause-041000 kubelet[1307]: E1123 08:50:19.500425    1307 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-041000\" is forbidden: User \"system:node:pause-041000\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-041000' and this object" podUID="05758d343c56073bd323082828a7f2e9" pod="kube-system/kube-apiserver-pause-041000"
	Nov 23 08:50:23 pause-041000 kubelet[1307]: W1123 08:50:23.317956    1307 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 23 08:50:29 pause-041000 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:50:29 pause-041000 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:50:29 pause-041000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-041000 -n pause-041000
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-041000 -n pause-041000: exit status 2 (369.224275ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-041000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.51s)
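For reproducing this pause check by hand, a minimal sketch using the profile name from this run (pause-041000) and standard minikube subcommands; the systemctl invocation mirrors the ssh commands recorded in the Audit table elsewhere in this report, and treating it as the right diagnostic step is an assumption, not part of the test:
	# Pause the profile, then query the same status field the post-mortem checks:
	out/minikube-linux-arm64 pause -p pause-041000 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-041000 -n pause-041000
	# Confirm on the node whether the kubelet was actually stopped (assumed diagnostic step):
	out/minikube-linux-arm64 ssh -p pause-041000 sudo systemctl status kubelet --no-pager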

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-283312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-283312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (266.740826ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:54:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-283312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-283312 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-283312 describe deploy/metrics-server -n kube-system: exit status 1 (79.122932ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-283312 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
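The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's paused-state probe, which shells out to `sudo runc list -f json` and fails here because /run/runc is missing on the crio node. A hedged sketch for re-running that probe manually; the ssh form follows the Audit entries later in this report, and the availability of crictl inside the kicbase image is an assumption:
	# Re-run the probe that the addon enable path reports as failing:
	out/minikube-linux-arm64 ssh -p old-k8s-version-283312 sudo runc list -f json
	# List containers through the CRI instead (assumes crictl is present on the node):
	out/minikube-linux-arm64 ssh -p old-k8s-version-283312 sudo crictl ps -a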
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-283312
helpers_test.go:243: (dbg) docker inspect old-k8s-version-283312:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3",
	        "Created": "2025-11-23T08:53:01.800677774Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1219954,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:53:01.863585731Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3/hosts",
	        "LogPath": "/var/lib/docker/containers/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3-json.log",
	        "Name": "/old-k8s-version-283312",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-283312:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-283312",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3",
	                "LowerDir": "/var/lib/docker/overlay2/f7800adf0bb2faf578ed2bf4a26065d85d982030afc07cc96e1142a50ec29c06-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f7800adf0bb2faf578ed2bf4a26065d85d982030afc07cc96e1142a50ec29c06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f7800adf0bb2faf578ed2bf4a26065d85d982030afc07cc96e1142a50ec29c06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f7800adf0bb2faf578ed2bf4a26065d85d982030afc07cc96e1142a50ec29c06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-283312",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-283312/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-283312",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-283312",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-283312",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e418d6290f06c0da56df18b87f3d6ac3679920d4bce632d8abe16cdb5b0fef6e",
	            "SandboxKey": "/var/run/docker/netns/e418d6290f06",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34512"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34513"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34516"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34514"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34515"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-283312": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:2f:56:3a:19:22",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1c3c4615dfb11778d84973791ecb3bc879152d7ae7a1ee624548096be909deb9",
	                    "EndpointID": "37970d111318ead09b87977a1d85573f01b5ba2ce3f66a665ac4c3da9eeb30db",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-283312",
	                        "205e5ea134d1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-283312 -n old-k8s-version-283312
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-283312 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-283312 logs -n 25: (1.137628823s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-082524 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo containerd config dump                                                                                                                                                                                                  │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo crio config                                                                                                                                                                                                             │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ delete  │ -p cilium-082524                                                                                                                                                                                                                              │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │ 23 Nov 25 08:51 UTC │
	│ start   │ -p force-systemd-env-498438 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-498438  │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │ 23 Nov 25 08:52 UTC │
	│ delete  │ -p kubernetes-upgrade-354226                                                                                                                                                                                                                  │ kubernetes-upgrade-354226 │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ start   │ -p cert-expiration-322507 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-322507    │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ delete  │ -p force-systemd-env-498438                                                                                                                                                                                                                   │ force-systemd-env-498438  │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ start   │ -p cert-options-194318 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-194318       │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ ssh     │ cert-options-194318 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-194318       │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ ssh     │ -p cert-options-194318 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-194318       │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ delete  │ -p cert-options-194318                                                                                                                                                                                                                        │ cert-options-194318       │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ start   │ -p old-k8s-version-283312 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-283312    │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-283312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-283312    │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:52:55
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:52:55.820688 1219561 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:52:55.821196 1219561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:52:55.821242 1219561 out.go:374] Setting ErrFile to fd 2...
	I1123 08:52:55.821263 1219561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:52:55.821590 1219561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:52:55.822040 1219561 out.go:368] Setting JSON to false
	I1123 08:52:55.823294 1219561 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34521,"bootTime":1763853455,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 08:52:55.823398 1219561 start.go:143] virtualization:  
	I1123 08:52:55.827270 1219561 out.go:179] * [old-k8s-version-283312] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:52:55.831886 1219561 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:52:55.831981 1219561 notify.go:221] Checking for updates...
	I1123 08:52:55.838600 1219561 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:52:55.841841 1219561 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:52:55.845057 1219561 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 08:52:55.848286 1219561 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:52:55.851529 1219561 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:52:55.855113 1219561 config.go:182] Loaded profile config "cert-expiration-322507": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:52:55.855291 1219561 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:52:55.887523 1219561 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:52:55.887659 1219561 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:52:55.946150 1219561 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:52:55.930138689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:52:55.946263 1219561 docker.go:319] overlay module found
	I1123 08:52:55.949607 1219561 out.go:179] * Using the docker driver based on user configuration
	I1123 08:52:55.952552 1219561 start.go:309] selected driver: docker
	I1123 08:52:55.952573 1219561 start.go:927] validating driver "docker" against <nil>
	I1123 08:52:55.952587 1219561 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:52:55.953299 1219561 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:52:56.011631 1219561 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:52:56.000474245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:52:56.011820 1219561 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:52:56.012084 1219561 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:52:56.015164 1219561 out.go:179] * Using Docker driver with root privileges
	I1123 08:52:56.018150 1219561 cni.go:84] Creating CNI manager for ""
	I1123 08:52:56.018228 1219561 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:52:56.018241 1219561 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:52:56.018331 1219561 start.go:353] cluster config:
	{Name:old-k8s-version-283312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-283312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:52:56.021608 1219561 out.go:179] * Starting "old-k8s-version-283312" primary control-plane node in "old-k8s-version-283312" cluster
	I1123 08:52:56.024478 1219561 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:52:56.027392 1219561 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:52:56.030271 1219561 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:52:56.030328 1219561 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1123 08:52:56.030358 1219561 cache.go:65] Caching tarball of preloaded images
	I1123 08:52:56.030357 1219561 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:52:56.030445 1219561 preload.go:238] Found /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 08:52:56.030456 1219561 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1123 08:52:56.030558 1219561 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/config.json ...
	I1123 08:52:56.030574 1219561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/config.json: {Name:mk498d77c8529350e8a4fbc7c916dfff9304de2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:52:56.049634 1219561 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:52:56.049657 1219561 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:52:56.049677 1219561 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:52:56.049709 1219561 start.go:360] acquireMachinesLock for old-k8s-version-283312: {Name:mk6342c5cc3dd03ef4a67a137840af521342123c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:52:56.049818 1219561 start.go:364] duration metric: took 88.334µs to acquireMachinesLock for "old-k8s-version-283312"
	I1123 08:52:56.049851 1219561 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-283312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-283312 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:52:56.049928 1219561 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:52:56.053390 1219561 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:52:56.053625 1219561 start.go:159] libmachine.API.Create for "old-k8s-version-283312" (driver="docker")
	I1123 08:52:56.053662 1219561 client.go:173] LocalClient.Create starting
	I1123 08:52:56.053729 1219561 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem
	I1123 08:52:56.053765 1219561 main.go:143] libmachine: Decoding PEM data...
	I1123 08:52:56.053791 1219561 main.go:143] libmachine: Parsing certificate...
	I1123 08:52:56.053849 1219561 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem
	I1123 08:52:56.053874 1219561 main.go:143] libmachine: Decoding PEM data...
	I1123 08:52:56.053894 1219561 main.go:143] libmachine: Parsing certificate...
	I1123 08:52:56.054459 1219561 cli_runner.go:164] Run: docker network inspect old-k8s-version-283312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:52:56.071664 1219561 cli_runner.go:211] docker network inspect old-k8s-version-283312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:52:56.071751 1219561 network_create.go:284] running [docker network inspect old-k8s-version-283312] to gather additional debugging logs...
	I1123 08:52:56.071773 1219561 cli_runner.go:164] Run: docker network inspect old-k8s-version-283312
	W1123 08:52:56.088452 1219561 cli_runner.go:211] docker network inspect old-k8s-version-283312 returned with exit code 1
	I1123 08:52:56.088492 1219561 network_create.go:287] error running [docker network inspect old-k8s-version-283312]: docker network inspect old-k8s-version-283312: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-283312 not found
	I1123 08:52:56.088507 1219561 network_create.go:289] output of [docker network inspect old-k8s-version-283312]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-283312 not found
	
	** /stderr **
	I1123 08:52:56.088639 1219561 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:52:56.105580 1219561 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-32d396d9f7df IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:9b:29:4a:5c:ab} reservation:<nil>}
	I1123 08:52:56.105845 1219561 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-859c97accd92 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:ea:cf:62:f4:f8} reservation:<nil>}
	I1123 08:52:56.106196 1219561 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-50e966d7b39a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:1d:b6:b9:b9:ef} reservation:<nil>}
	I1123 08:52:56.106529 1219561 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8725bce588cd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:da:b5:ff:0f:7b:43} reservation:<nil>}
	I1123 08:52:56.106952 1219561 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3e210}
	I1123 08:52:56.106975 1219561 network_create.go:124] attempt to create docker network old-k8s-version-283312 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 08:52:56.107030 1219561 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-283312 old-k8s-version-283312
	I1123 08:52:56.166179 1219561 network_create.go:108] docker network old-k8s-version-283312 192.168.85.0/24 created
	I1123 08:52:56.166210 1219561 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-283312" container
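The lines above show minikube probing the existing bridge networks (192.168.49/58/67/76.0/24 are already taken) and settling on the first free /24, 192.168.85.0/24, before creating a dedicated network for this profile. A minimal shell sketch for verifying the result by hand, assuming the profile name from this run:

	# Inspect the bridge network minikube just created for this profile
	docker network inspect old-k8s-version-283312 \
	  --format 'subnet={{range .IPAM.Config}}{{.Subnet}}{{end}} gateway={{range .IPAM.Config}}{{.Gateway}}{{end}}'
	# For this run the output would be: subnet=192.168.85.0/24 gateway=192.168.85.1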
	I1123 08:52:56.166299 1219561 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:52:56.182497 1219561 cli_runner.go:164] Run: docker volume create old-k8s-version-283312 --label name.minikube.sigs.k8s.io=old-k8s-version-283312 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:52:56.200428 1219561 oci.go:103] Successfully created a docker volume old-k8s-version-283312
	I1123 08:52:56.200520 1219561 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-283312-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-283312 --entrypoint /usr/bin/test -v old-k8s-version-283312:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:52:56.772138 1219561 oci.go:107] Successfully prepared a docker volume old-k8s-version-283312
	I1123 08:52:56.772206 1219561 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:52:56.772222 1219561 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:52:56.772295 1219561 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-283312:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 08:53:01.728811 1219561 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-283312:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.956474307s)
	I1123 08:53:01.728847 1219561 kic.go:203] duration metric: took 4.956621626s to extract preloaded images to volume ...
	W1123 08:53:01.728984 1219561 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:53:01.729093 1219561 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:53:01.786718 1219561 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-283312 --name old-k8s-version-283312 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-283312 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-283312 --network old-k8s-version-283312 --ip 192.168.85.2 --volume old-k8s-version-283312:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:53:02.139397 1219561 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Running}}
	I1123 08:53:02.166010 1219561 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Status}}
	I1123 08:53:02.189462 1219561 cli_runner.go:164] Run: docker exec old-k8s-version-283312 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:53:02.250560 1219561 oci.go:144] the created container "old-k8s-version-283312" has a running status.
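The kic container publishes ports 22, 2376, 5000, 8443 and 32443 on ephemeral loopback ports, so the host-side SSH endpoint that libmachine dials below (127.0.0.1:34512) can be recovered at any time. An illustrative check, assuming the same container name:

	# Map the container's SSH port back to the host side
	docker port old-k8s-version-283312 22/tcp
	# In this run this prints 127.0.0.1:34512, the endpoint used for provisioning below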
	I1123 08:53:02.250585 1219561 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa...
	I1123 08:53:02.877464 1219561 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:53:02.896348 1219561 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Status}}
	I1123 08:53:02.921686 1219561 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:53:02.921722 1219561 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-283312 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:53:02.994379 1219561 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Status}}
	I1123 08:53:03.014748 1219561 machine.go:94] provisionDockerMachine start ...
	I1123 08:53:03.014849 1219561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:53:03.031936 1219561 main.go:143] libmachine: Using SSH client type: native
	I1123 08:53:03.032285 1219561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34512 <nil> <nil>}
	I1123 08:53:03.032299 1219561 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:53:03.032882 1219561 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34972->127.0.0.1:34512: read: connection reset by peer
	I1123 08:53:06.187125 1219561 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-283312
	
	I1123 08:53:06.187149 1219561 ubuntu.go:182] provisioning hostname "old-k8s-version-283312"
	I1123 08:53:06.187253 1219561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:53:06.205940 1219561 main.go:143] libmachine: Using SSH client type: native
	I1123 08:53:06.206299 1219561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34512 <nil> <nil>}
	I1123 08:53:06.206312 1219561 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-283312 && echo "old-k8s-version-283312" | sudo tee /etc/hostname
	I1123 08:53:06.375316 1219561 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-283312
	
	I1123 08:53:06.375408 1219561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:53:06.392951 1219561 main.go:143] libmachine: Using SSH client type: native
	I1123 08:53:06.393305 1219561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34512 <nil> <nil>}
	I1123 08:53:06.393334 1219561 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-283312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-283312/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-283312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:53:06.547533 1219561 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:53:06.547559 1219561 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 08:53:06.547588 1219561 ubuntu.go:190] setting up certificates
	I1123 08:53:06.547597 1219561 provision.go:84] configureAuth start
	I1123 08:53:06.547680 1219561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-283312
	I1123 08:53:06.564429 1219561 provision.go:143] copyHostCerts
	I1123 08:53:06.564500 1219561 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem, removing ...
	I1123 08:53:06.564509 1219561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem
	I1123 08:53:06.564589 1219561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 08:53:06.564686 1219561 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem, removing ...
	I1123 08:53:06.564692 1219561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem
	I1123 08:53:06.564718 1219561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 08:53:06.564768 1219561 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem, removing ...
	I1123 08:53:06.564773 1219561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem
	I1123 08:53:06.564795 1219561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
	I1123 08:53:06.564845 1219561 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-283312 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-283312]
	I1123 08:53:06.623477 1219561 provision.go:177] copyRemoteCerts
	I1123 08:53:06.623551 1219561 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:53:06.623609 1219561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:53:06.643009 1219561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:53:06.746949 1219561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1123 08:53:06.765246 1219561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:53:06.783229 1219561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:53:06.802213 1219561 provision.go:87] duration metric: took 254.592078ms to configureAuth
	I1123 08:53:06.802241 1219561 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:53:06.802422 1219561 config.go:182] Loaded profile config "old-k8s-version-283312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:53:06.802525 1219561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:53:06.821454 1219561 main.go:143] libmachine: Using SSH client type: native
	I1123 08:53:06.821770 1219561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34512 <nil> <nil>}
	I1123 08:53:06.821790 1219561 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:53:07.127510 1219561 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:53:07.127537 1219561 machine.go:97] duration metric: took 4.112762846s to provisionDockerMachine
	I1123 08:53:07.127549 1219561 client.go:176] duration metric: took 11.073875044s to LocalClient.Create
	I1123 08:53:07.127562 1219561 start.go:167] duration metric: took 11.073938763s to libmachine.API.Create "old-k8s-version-283312"
	I1123 08:53:07.127580 1219561 start.go:293] postStartSetup for "old-k8s-version-283312" (driver="docker")
	I1123 08:53:07.127594 1219561 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:53:07.127672 1219561 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:53:07.127715 1219561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:53:07.145604 1219561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:53:07.251171 1219561 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:53:07.254563 1219561 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:53:07.254593 1219561 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:53:07.254605 1219561 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/addons for local assets ...
	I1123 08:53:07.254662 1219561 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/files for local assets ...
	I1123 08:53:07.254744 1219561 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem -> 10431592.pem in /etc/ssl/certs
	I1123 08:53:07.254857 1219561 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:53:07.262026 1219561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:53:07.279644 1219561 start.go:296] duration metric: took 152.045287ms for postStartSetup
	I1123 08:53:07.280072 1219561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-283312
	I1123 08:53:07.297547 1219561 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/config.json ...
	I1123 08:53:07.297819 1219561 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:53:07.297866 1219561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:53:07.315349 1219561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:53:07.420030 1219561 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:53:07.424715 1219561 start.go:128] duration metric: took 11.374768077s to createHost
	I1123 08:53:07.424738 1219561 start.go:83] releasing machines lock for "old-k8s-version-283312", held for 11.374906493s
	I1123 08:53:07.424838 1219561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-283312
	I1123 08:53:07.441417 1219561 ssh_runner.go:195] Run: cat /version.json
	I1123 08:53:07.441469 1219561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:53:07.441749 1219561 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:53:07.441800 1219561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:53:07.463411 1219561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:53:07.475133 1219561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:53:07.566805 1219561 ssh_runner.go:195] Run: systemctl --version
	I1123 08:53:07.656762 1219561 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:53:07.694915 1219561 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:53:07.699688 1219561 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:53:07.699778 1219561 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:53:07.729958 1219561 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:53:07.730000 1219561 start.go:496] detecting cgroup driver to use...
	I1123 08:53:07.730034 1219561 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:53:07.730097 1219561 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:53:07.748961 1219561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:53:07.761993 1219561 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:53:07.762084 1219561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:53:07.778806 1219561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:53:07.797217 1219561 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:53:07.916834 1219561 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:53:08.049798 1219561 docker.go:234] disabling docker service ...
	I1123 08:53:08.049915 1219561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:53:08.076636 1219561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:53:08.089784 1219561 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:53:08.201459 1219561 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:53:08.322147 1219561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:53:08.334487 1219561 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:53:08.347722 1219561 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1123 08:53:08.347788 1219561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:53:08.356267 1219561 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 08:53:08.356414 1219561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:53:08.367760 1219561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:53:08.378088 1219561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:53:08.386753 1219561 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:53:08.394735 1219561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:53:08.403346 1219561 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:53:08.416454 1219561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
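The sed commands above edit the CRI-O drop-in in place: they pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and allow unprivileged low ports via default_sysctls. A hedged sketch of confirming the rewritten fields, assuming the file path and values from this run:

	# Fields rewritten by the sed edits above
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected after this run:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",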
	I1123 08:53:08.425131 1219561 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:53:08.432493 1219561 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:53:08.439982 1219561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:53:08.561499 1219561 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 08:53:08.750235 1219561 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:53:08.750309 1219561 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:53:08.753868 1219561 start.go:564] Will wait 60s for crictl version
	I1123 08:53:08.753983 1219561 ssh_runner.go:195] Run: which crictl
	I1123 08:53:08.757192 1219561 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:53:08.784873 1219561 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:53:08.784954 1219561 ssh_runner.go:195] Run: crio --version
	I1123 08:53:08.815802 1219561 ssh_runner.go:195] Run: crio --version
	I1123 08:53:08.846558 1219561 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1123 08:53:08.849370 1219561 cli_runner.go:164] Run: docker network inspect old-k8s-version-283312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:53:08.865707 1219561 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:53:08.869946 1219561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:53:08.879517 1219561 kubeadm.go:884] updating cluster {Name:old-k8s-version-283312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-283312 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:53:08.879635 1219561 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:53:08.879694 1219561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:53:08.912508 1219561 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:53:08.912534 1219561 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:53:08.912597 1219561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:53:08.943396 1219561 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:53:08.943423 1219561 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:53:08.943432 1219561 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1123 08:53:08.943527 1219561 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-283312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-283312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
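The unit fragment above is installed as a systemd drop-in (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below), so kubelet starts with the CRI-O endpoint, hostname override and node IP for this profile. An illustrative way to inspect the merged unit on the node:

	# Show the kubelet unit together with minikube's 10-kubeadm.conf drop-in
	systemctl cat kubelet
	# The effective ExecStart should carry the flags listed above (--hostname-override, --node-ip, ...)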
	I1123 08:53:08.943625 1219561 ssh_runner.go:195] Run: crio config
	I1123 08:53:09.013149 1219561 cni.go:84] Creating CNI manager for ""
	I1123 08:53:09.013186 1219561 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:53:09.013210 1219561 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:53:09.013234 1219561 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-283312 NodeName:old-k8s-version-283312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:53:09.013401 1219561 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-283312"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:53:09.013497 1219561 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1123 08:53:09.021825 1219561 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:53:09.021903 1219561 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:53:09.029147 1219561 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1123 08:53:09.041893 1219561 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:53:09.054277 1219561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
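The generated kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new before bootstrap. As an illustrative sketch only (the test itself does not run this, and it assumes kubeadm sits alongside kubelet under /var/lib/minikube/binaries/v1.28.0), the staged file could be validated with a dry run:

	# Validate the staged kubeadm config without changing node state (illustrative)
	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run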
	I1123 08:53:09.066506 1219561 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:53:09.070276 1219561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:53:09.086511 1219561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:53:09.205692 1219561 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:53:09.220920 1219561 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312 for IP: 192.168.85.2
	I1123 08:53:09.220981 1219561 certs.go:195] generating shared ca certs ...
	I1123 08:53:09.221013 1219561 certs.go:227] acquiring lock for ca certs: {Name:mk8b2dd1177c57b74f955f055073d275001ee616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:53:09.221189 1219561 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key
	I1123 08:53:09.221266 1219561 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key
	I1123 08:53:09.221301 1219561 certs.go:257] generating profile certs ...
	I1123 08:53:09.221379 1219561 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.key
	I1123 08:53:09.221417 1219561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt with IP's: []
	I1123 08:53:09.545460 1219561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt ...
	I1123 08:53:09.545492 1219561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt: {Name:mk1ce3675c40d1eddfe39314ee609a5d402b083e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:53:09.545687 1219561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.key ...
	I1123 08:53:09.545700 1219561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.key: {Name:mkbb8c33f54cec756a13bcda2b2529bb2d1406e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:53:09.545793 1219561 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/apiserver.key.0b5b326f
	I1123 08:53:09.545813 1219561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/apiserver.crt.0b5b326f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 08:53:09.888022 1219561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/apiserver.crt.0b5b326f ...
	I1123 08:53:09.888054 1219561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/apiserver.crt.0b5b326f: {Name:mk453f1e287090c177b71f3f2c1ae21b39699642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:53:09.888224 1219561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/apiserver.key.0b5b326f ...
	I1123 08:53:09.888238 1219561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/apiserver.key.0b5b326f: {Name:mk0b49def0ebbb97460f72aa7aad4d7e003fd3bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:53:09.888327 1219561 certs.go:382] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/apiserver.crt.0b5b326f -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/apiserver.crt
	I1123 08:53:09.888411 1219561 certs.go:386] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/apiserver.key.0b5b326f -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/apiserver.key
	I1123 08:53:09.888479 1219561 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/proxy-client.key
	I1123 08:53:09.888499 1219561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/proxy-client.crt with IP's: []
	I1123 08:53:10.224161 1219561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/proxy-client.crt ...
	I1123 08:53:10.224192 1219561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/proxy-client.crt: {Name:mk75517e1d74b1e62a5f7f9e25793f74f449e4ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:53:10.224359 1219561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/proxy-client.key ...
	I1123 08:53:10.224374 1219561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/proxy-client.key: {Name:mk810a3d213765b8d6af9e5d257a94f4e7cfa543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:53:10.224547 1219561 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem (1338 bytes)
	W1123 08:53:10.224592 1219561 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159_empty.pem, impossibly tiny 0 bytes
	I1123 08:53:10.224609 1219561 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:53:10.224637 1219561 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:53:10.224666 1219561 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:53:10.224697 1219561 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem (1675 bytes)
	I1123 08:53:10.224744 1219561 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:53:10.225309 1219561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:53:10.245641 1219561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:53:10.263172 1219561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:53:10.280480 1219561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:53:10.298497 1219561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1123 08:53:10.314835 1219561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:53:10.332391 1219561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:53:10.350294 1219561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:53:10.371609 1219561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:53:10.389353 1219561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem --> /usr/share/ca-certificates/1043159.pem (1338 bytes)
	I1123 08:53:10.406129 1219561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /usr/share/ca-certificates/10431592.pem (1708 bytes)
	I1123 08:53:10.424077 1219561 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
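With the profile certificates copied into /var/lib/minikube/certs, the apiserver certificate carries the SANs generated earlier in this log (10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.85.2). A small sketch for confirming them on the node:

	# List the SANs baked into the freshly generated apiserver certificate
	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	  | grep -A1 'Subject Alternative Name'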
	I1123 08:53:10.437505 1219561 ssh_runner.go:195] Run: openssl version
	I1123 08:53:10.443732 1219561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:53:10.451890 1219561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:53:10.455705 1219561 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:53:10.455806 1219561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:53:10.496203 1219561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:53:10.504498 1219561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1043159.pem && ln -fs /usr/share/ca-certificates/1043159.pem /etc/ssl/certs/1043159.pem"
	I1123 08:53:10.512482 1219561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1043159.pem
	I1123 08:53:10.516209 1219561 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:03 /usr/share/ca-certificates/1043159.pem
	I1123 08:53:10.516316 1219561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1043159.pem
	I1123 08:53:10.557046 1219561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1043159.pem /etc/ssl/certs/51391683.0"
	I1123 08:53:10.565333 1219561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10431592.pem && ln -fs /usr/share/ca-certificates/10431592.pem /etc/ssl/certs/10431592.pem"
	I1123 08:53:10.573338 1219561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10431592.pem
	I1123 08:53:10.576812 1219561 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:03 /usr/share/ca-certificates/10431592.pem
	I1123 08:53:10.576876 1219561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10431592.pem
	I1123 08:53:10.617407 1219561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10431592.pem /etc/ssl/certs/3ec20f2e.0"
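	The three symlink steps above follow OpenSSL's CA-directory convention: the link name is the certificate's subject hash plus a ".0" suffix. A minimal sketch of the same step for one certificate, assuming the paths shown in the log:
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 for this CA, per the log
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"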
	I1123 08:53:10.626847 1219561 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:53:10.630248 1219561 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:53:10.630319 1219561 kubeadm.go:401] StartCluster: {Name:old-k8s-version-283312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-283312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:53:10.630458 1219561 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:53:10.630521 1219561 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:53:10.657920 1219561 cri.go:89] found id: ""
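	The empty `found id: ""` result means no kube-system containers exist yet on the node, so minikube treats this as a fresh start rather than cleaning up a previous cluster. Roughly the same check, run by hand with crictl's default table output (same label filter as in the log):
	  sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system   # no rows => nothing to clean up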
	I1123 08:53:10.657994 1219561 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:53:10.665758 1219561 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:53:10.673789 1219561 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:53:10.673852 1219561 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:53:10.681954 1219561 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:53:10.682022 1219561 kubeadm.go:158] found existing configuration files:
	
	I1123 08:53:10.682083 1219561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:53:10.690027 1219561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:53:10.690098 1219561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:53:10.697386 1219561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:53:10.705071 1219561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:53:10.705137 1219561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:53:10.712456 1219561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:53:10.719830 1219561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:53:10.719899 1219561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:53:10.729048 1219561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:53:10.736724 1219561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:53:10.736805 1219561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:53:10.743836 1219561 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:53:10.789006 1219561 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1123 08:53:10.789074 1219561 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:53:10.826338 1219561 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:53:10.826413 1219561 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:53:10.826454 1219561 kubeadm.go:319] OS: Linux
	I1123 08:53:10.826503 1219561 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:53:10.826556 1219561 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:53:10.826607 1219561 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:53:10.826659 1219561 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:53:10.826711 1219561 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:53:10.826770 1219561 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:53:10.826824 1219561 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:53:10.826880 1219561 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:53:10.826930 1219561 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:53:10.918963 1219561 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:53:10.919118 1219561 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:53:10.919261 1219561 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:53:11.092047 1219561 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:53:11.095063 1219561 out.go:252]   - Generating certificates and keys ...
	I1123 08:53:11.095223 1219561 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:53:11.095453 1219561 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:53:11.570481 1219561 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:53:11.780917 1219561 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:53:12.288540 1219561 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:53:12.682742 1219561 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:53:12.876841 1219561 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:53:12.877198 1219561 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-283312] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:53:13.162181 1219561 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:53:13.162816 1219561 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-283312] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:53:14.668958 1219561 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:53:14.991861 1219561 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:53:15.853172 1219561 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:53:15.853415 1219561 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:53:16.247942 1219561 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:53:16.944353 1219561 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:53:17.919542 1219561 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:53:18.737904 1219561 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:53:18.738596 1219561 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:53:18.743124 1219561 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:53:18.746585 1219561 out.go:252]   - Booting up control plane ...
	I1123 08:53:18.746679 1219561 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:53:18.746757 1219561 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:53:18.747777 1219561 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:53:18.763383 1219561 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:53:18.764439 1219561 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:53:18.764492 1219561 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:53:18.906728 1219561 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1123 08:53:26.912022 1219561 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.005724 seconds
	I1123 08:53:26.912153 1219561 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:53:26.931014 1219561 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:53:27.465641 1219561 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:53:27.465880 1219561 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-283312 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:53:27.978580 1219561 kubeadm.go:319] [bootstrap-token] Using token: pv80df.18h4erjucthd5n42
	I1123 08:53:27.981599 1219561 out.go:252]   - Configuring RBAC rules ...
	I1123 08:53:27.981737 1219561 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:53:27.989121 1219561 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:53:27.998187 1219561 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:53:28.009686 1219561 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:53:28.015276 1219561 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:53:28.021970 1219561 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:53:28.038958 1219561 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:53:28.286648 1219561 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:53:28.446661 1219561 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:53:28.450598 1219561 kubeadm.go:319] 
	I1123 08:53:28.450677 1219561 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:53:28.450687 1219561 kubeadm.go:319] 
	I1123 08:53:28.450760 1219561 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:53:28.450774 1219561 kubeadm.go:319] 
	I1123 08:53:28.450799 1219561 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:53:28.451249 1219561 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:53:28.451309 1219561 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:53:28.451317 1219561 kubeadm.go:319] 
	I1123 08:53:28.451368 1219561 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:53:28.451374 1219561 kubeadm.go:319] 
	I1123 08:53:28.451419 1219561 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:53:28.451428 1219561 kubeadm.go:319] 
	I1123 08:53:28.451476 1219561 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:53:28.451550 1219561 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:53:28.451627 1219561 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:53:28.451636 1219561 kubeadm.go:319] 
	I1123 08:53:28.451981 1219561 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:53:28.452071 1219561 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:53:28.452080 1219561 kubeadm.go:319] 
	I1123 08:53:28.452399 1219561 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token pv80df.18h4erjucthd5n42 \
	I1123 08:53:28.452545 1219561 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb \
	I1123 08:53:28.452626 1219561 kubeadm.go:319] 	--control-plane 
	I1123 08:53:28.452634 1219561 kubeadm.go:319] 
	I1123 08:53:28.452755 1219561 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:53:28.452760 1219561 kubeadm.go:319] 
	I1123 08:53:28.452881 1219561 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token pv80df.18h4erjucthd5n42 \
	I1123 08:53:28.453026 1219561 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb 
	I1123 08:53:28.461984 1219561 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 08:53:28.462102 1219561 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
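	The join commands printed above embed a discovery-token CA certificate hash. If needed, that sha256 value can be recomputed with the standard OpenSSL pipeline from the kubeadm documentation; the CA path below is the one minikube copied earlier in this log, and the pipeline assumes an RSA CA key:
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'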
	I1123 08:53:28.462123 1219561 cni.go:84] Creating CNI manager for ""
	I1123 08:53:28.462133 1219561 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:53:28.465576 1219561 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:53:28.468397 1219561 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:53:28.473793 1219561 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1123 08:53:28.473811 1219561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:53:28.501640 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
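	Because the docker driver plus crio runtime selects kindnet, the applied manifest creates a DaemonSet in kube-system (named kindnet here, judging by the kindnet-fnbgj pod later in the log; the exact name is an assumption). A quick way to confirm it rolled out, sketched with the same kubectl binary and kubeconfig the log uses:
	  sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system rollout status daemonset kindnet --timeout=120s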
	I1123 08:53:29.480147 1219561 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:53:29.480282 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:29.480373 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-283312 minikube.k8s.io/updated_at=2025_11_23T08_53_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=old-k8s-version-283312 minikube.k8s.io/primary=true
	I1123 08:53:29.495006 1219561 ops.go:34] apiserver oom_adj: -16
	I1123 08:53:29.630777 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:30.131390 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:30.630817 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:31.130865 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:31.631004 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:32.131327 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:32.630920 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:33.131146 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:33.630865 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:34.131541 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:34.631814 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:35.131483 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:35.631136 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:36.131769 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:36.630886 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:37.131592 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:37.631130 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:38.130849 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:38.630837 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:39.130973 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:39.631539 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:40.130907 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:40.630870 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:41.131496 1219561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:53:41.273179 1219561 kubeadm.go:1114] duration metric: took 11.792940367s to wait for elevateKubeSystemPrivileges
	I1123 08:53:41.273210 1219561 kubeadm.go:403] duration metric: took 30.642895546s to StartCluster
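	The repeated `get sa default` calls above are minikube waiting for the default ServiceAccount to appear after binding cluster-admin to kube-system:default; an equivalent wait, sketched as a shell loop around the same kubectl invocation:
	  until sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      get sa default >/dev/null 2>&1; do
	    sleep 0.5   # the log shows roughly half a second between attempts
	  done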
	I1123 08:53:41.273226 1219561 settings.go:142] acquiring lock: {Name:mk23f3092f33e47ced9558cb4bac2b30c55547fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:53:41.273286 1219561 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:53:41.274223 1219561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:53:41.274426 1219561 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:53:41.274549 1219561 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:53:41.274800 1219561 config.go:182] Loaded profile config "old-k8s-version-283312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:53:41.274838 1219561 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:53:41.274895 1219561 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-283312"
	I1123 08:53:41.274908 1219561 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-283312"
	I1123 08:53:41.274929 1219561 host.go:66] Checking if "old-k8s-version-283312" exists ...
	I1123 08:53:41.275525 1219561 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Status}}
	I1123 08:53:41.275971 1219561 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-283312"
	I1123 08:53:41.275996 1219561 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-283312"
	I1123 08:53:41.276250 1219561 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Status}}
	I1123 08:53:41.277975 1219561 out.go:179] * Verifying Kubernetes components...
	I1123 08:53:41.281259 1219561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:53:41.302953 1219561 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:53:41.306971 1219561 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:53:41.306994 1219561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:53:41.307070 1219561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:53:41.313600 1219561 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-283312"
	I1123 08:53:41.313635 1219561 host.go:66] Checking if "old-k8s-version-283312" exists ...
	I1123 08:53:41.314052 1219561 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Status}}
	I1123 08:53:41.348126 1219561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:53:41.357147 1219561 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:53:41.357166 1219561 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:53:41.357230 1219561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:53:41.383398 1219561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34512 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:53:41.548476 1219561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:53:41.554731 1219561 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:53:41.554893 1219561 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:53:41.600773 1219561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:53:42.384938 1219561 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-283312" to be "Ready" ...
	I1123 08:53:42.385307 1219561 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 08:53:42.425314 1219561 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 08:53:42.428143 1219561 addons.go:530] duration metric: took 1.153302021s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:53:42.889823 1219561 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-283312" context rescaled to 1 replicas
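	The host record injection and the replica rescale above can be checked directly against the CoreDNS objects; a sketch using a generic kubectl context with the names shown in the log:
	  kubectl -n kube-system get configmap coredns -o yaml | grep -B1 -A2 host.minikube.internal
	  kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}'   # 1 after the rescale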
	W1123 08:53:44.388290 1219561 node_ready.go:57] node "old-k8s-version-283312" has "Ready":"False" status (will retry)
	W1123 08:53:46.388547 1219561 node_ready.go:57] node "old-k8s-version-283312" has "Ready":"False" status (will retry)
	W1123 08:53:48.888637 1219561 node_ready.go:57] node "old-k8s-version-283312" has "Ready":"False" status (will retry)
	W1123 08:53:51.387890 1219561 node_ready.go:57] node "old-k8s-version-283312" has "Ready":"False" status (will retry)
	W1123 08:53:53.388434 1219561 node_ready.go:57] node "old-k8s-version-283312" has "Ready":"False" status (will retry)
	W1123 08:53:55.888468 1219561 node_ready.go:57] node "old-k8s-version-283312" has "Ready":"False" status (will retry)
	I1123 08:53:56.389114 1219561 node_ready.go:49] node "old-k8s-version-283312" is "Ready"
	I1123 08:53:56.389144 1219561 node_ready.go:38] duration metric: took 14.004125855s for node "old-k8s-version-283312" to be "Ready" ...
	I1123 08:53:56.389160 1219561 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:53:56.389222 1219561 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:53:56.401483 1219561 api_server.go:72] duration metric: took 15.127024011s to wait for apiserver process to appear ...
	I1123 08:53:56.401506 1219561 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:53:56.401530 1219561 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 08:53:56.412070 1219561 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 08:53:56.413702 1219561 api_server.go:141] control plane version: v1.28.0
	I1123 08:53:56.413729 1219561 api_server.go:131] duration metric: took 12.212327ms to wait for apiserver health ...
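	The healthz probe above is a plain GET against the API server; /healthz is readable by unauthenticated clients in a default kubeadm cluster, so the same check can be made by hand from the host:
	  curl -sk https://192.168.85.2:8443/healthz   # expect: ok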
	I1123 08:53:56.413739 1219561 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:53:56.417610 1219561 system_pods.go:59] 8 kube-system pods found
	I1123 08:53:56.417655 1219561 system_pods.go:61] "coredns-5dd5756b68-mpf62" [29956376-ee4e-402e-98dc-864a4ff169d3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:53:56.417662 1219561 system_pods.go:61] "etcd-old-k8s-version-283312" [171ec724-181b-4c1c-814b-7b3eb801b010] Running
	I1123 08:53:56.417671 1219561 system_pods.go:61] "kindnet-fnbgj" [ff60f979-e04b-41da-8682-971a31d72da3] Running
	I1123 08:53:56.417681 1219561 system_pods.go:61] "kube-apiserver-old-k8s-version-283312" [68187f7b-ab9d-4cda-97c7-0559bc9c6b8b] Running
	I1123 08:53:56.417691 1219561 system_pods.go:61] "kube-controller-manager-old-k8s-version-283312" [6824fd9a-3bcc-4856-b840-5f6c6866e870] Running
	I1123 08:53:56.417695 1219561 system_pods.go:61] "kube-proxy-5w4q4" [886c8da3-dfce-4d49-b73c-6799d52d1028] Running
	I1123 08:53:56.417708 1219561 system_pods.go:61] "kube-scheduler-old-k8s-version-283312" [e1a93883-3c97-4c10-abcc-8917c5752ebf] Running
	I1123 08:53:56.417714 1219561 system_pods.go:61] "storage-provisioner" [f8356741-0113-4d0f-b602-081220c219b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:53:56.417721 1219561 system_pods.go:74] duration metric: took 3.97608ms to wait for pod list to return data ...
	I1123 08:53:56.417733 1219561 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:53:56.420410 1219561 default_sa.go:45] found service account: "default"
	I1123 08:53:56.420433 1219561 default_sa.go:55] duration metric: took 2.690429ms for default service account to be created ...
	I1123 08:53:56.420443 1219561 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:53:56.424002 1219561 system_pods.go:86] 8 kube-system pods found
	I1123 08:53:56.424034 1219561 system_pods.go:89] "coredns-5dd5756b68-mpf62" [29956376-ee4e-402e-98dc-864a4ff169d3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:53:56.424042 1219561 system_pods.go:89] "etcd-old-k8s-version-283312" [171ec724-181b-4c1c-814b-7b3eb801b010] Running
	I1123 08:53:56.424050 1219561 system_pods.go:89] "kindnet-fnbgj" [ff60f979-e04b-41da-8682-971a31d72da3] Running
	I1123 08:53:56.424054 1219561 system_pods.go:89] "kube-apiserver-old-k8s-version-283312" [68187f7b-ab9d-4cda-97c7-0559bc9c6b8b] Running
	I1123 08:53:56.424059 1219561 system_pods.go:89] "kube-controller-manager-old-k8s-version-283312" [6824fd9a-3bcc-4856-b840-5f6c6866e870] Running
	I1123 08:53:56.424063 1219561 system_pods.go:89] "kube-proxy-5w4q4" [886c8da3-dfce-4d49-b73c-6799d52d1028] Running
	I1123 08:53:56.424067 1219561 system_pods.go:89] "kube-scheduler-old-k8s-version-283312" [e1a93883-3c97-4c10-abcc-8917c5752ebf] Running
	I1123 08:53:56.424073 1219561 system_pods.go:89] "storage-provisioner" [f8356741-0113-4d0f-b602-081220c219b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:53:56.424100 1219561 retry.go:31] will retry after 287.597498ms: missing components: kube-dns
	I1123 08:53:56.717626 1219561 system_pods.go:86] 8 kube-system pods found
	I1123 08:53:56.717662 1219561 system_pods.go:89] "coredns-5dd5756b68-mpf62" [29956376-ee4e-402e-98dc-864a4ff169d3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:53:56.717669 1219561 system_pods.go:89] "etcd-old-k8s-version-283312" [171ec724-181b-4c1c-814b-7b3eb801b010] Running
	I1123 08:53:56.717676 1219561 system_pods.go:89] "kindnet-fnbgj" [ff60f979-e04b-41da-8682-971a31d72da3] Running
	I1123 08:53:56.717680 1219561 system_pods.go:89] "kube-apiserver-old-k8s-version-283312" [68187f7b-ab9d-4cda-97c7-0559bc9c6b8b] Running
	I1123 08:53:56.717684 1219561 system_pods.go:89] "kube-controller-manager-old-k8s-version-283312" [6824fd9a-3bcc-4856-b840-5f6c6866e870] Running
	I1123 08:53:56.717688 1219561 system_pods.go:89] "kube-proxy-5w4q4" [886c8da3-dfce-4d49-b73c-6799d52d1028] Running
	I1123 08:53:56.717692 1219561 system_pods.go:89] "kube-scheduler-old-k8s-version-283312" [e1a93883-3c97-4c10-abcc-8917c5752ebf] Running
	I1123 08:53:56.717699 1219561 system_pods.go:89] "storage-provisioner" [f8356741-0113-4d0f-b602-081220c219b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:53:56.717714 1219561 retry.go:31] will retry after 317.751517ms: missing components: kube-dns
	I1123 08:53:57.039932 1219561 system_pods.go:86] 8 kube-system pods found
	I1123 08:53:57.039967 1219561 system_pods.go:89] "coredns-5dd5756b68-mpf62" [29956376-ee4e-402e-98dc-864a4ff169d3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:53:57.039974 1219561 system_pods.go:89] "etcd-old-k8s-version-283312" [171ec724-181b-4c1c-814b-7b3eb801b010] Running
	I1123 08:53:57.039983 1219561 system_pods.go:89] "kindnet-fnbgj" [ff60f979-e04b-41da-8682-971a31d72da3] Running
	I1123 08:53:57.039997 1219561 system_pods.go:89] "kube-apiserver-old-k8s-version-283312" [68187f7b-ab9d-4cda-97c7-0559bc9c6b8b] Running
	I1123 08:53:57.040005 1219561 system_pods.go:89] "kube-controller-manager-old-k8s-version-283312" [6824fd9a-3bcc-4856-b840-5f6c6866e870] Running
	I1123 08:53:57.040010 1219561 system_pods.go:89] "kube-proxy-5w4q4" [886c8da3-dfce-4d49-b73c-6799d52d1028] Running
	I1123 08:53:57.040018 1219561 system_pods.go:89] "kube-scheduler-old-k8s-version-283312" [e1a93883-3c97-4c10-abcc-8917c5752ebf] Running
	I1123 08:53:57.040024 1219561 system_pods.go:89] "storage-provisioner" [f8356741-0113-4d0f-b602-081220c219b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:53:57.040038 1219561 retry.go:31] will retry after 423.023608ms: missing components: kube-dns
	I1123 08:53:57.467339 1219561 system_pods.go:86] 8 kube-system pods found
	I1123 08:53:57.467377 1219561 system_pods.go:89] "coredns-5dd5756b68-mpf62" [29956376-ee4e-402e-98dc-864a4ff169d3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:53:57.467385 1219561 system_pods.go:89] "etcd-old-k8s-version-283312" [171ec724-181b-4c1c-814b-7b3eb801b010] Running
	I1123 08:53:57.467391 1219561 system_pods.go:89] "kindnet-fnbgj" [ff60f979-e04b-41da-8682-971a31d72da3] Running
	I1123 08:53:57.467396 1219561 system_pods.go:89] "kube-apiserver-old-k8s-version-283312" [68187f7b-ab9d-4cda-97c7-0559bc9c6b8b] Running
	I1123 08:53:57.467400 1219561 system_pods.go:89] "kube-controller-manager-old-k8s-version-283312" [6824fd9a-3bcc-4856-b840-5f6c6866e870] Running
	I1123 08:53:57.467403 1219561 system_pods.go:89] "kube-proxy-5w4q4" [886c8da3-dfce-4d49-b73c-6799d52d1028] Running
	I1123 08:53:57.467407 1219561 system_pods.go:89] "kube-scheduler-old-k8s-version-283312" [e1a93883-3c97-4c10-abcc-8917c5752ebf] Running
	I1123 08:53:57.467413 1219561 system_pods.go:89] "storage-provisioner" [f8356741-0113-4d0f-b602-081220c219b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:53:57.467434 1219561 retry.go:31] will retry after 493.965642ms: missing components: kube-dns
	I1123 08:53:57.965815 1219561 system_pods.go:86] 8 kube-system pods found
	I1123 08:53:57.965847 1219561 system_pods.go:89] "coredns-5dd5756b68-mpf62" [29956376-ee4e-402e-98dc-864a4ff169d3] Running
	I1123 08:53:57.965854 1219561 system_pods.go:89] "etcd-old-k8s-version-283312" [171ec724-181b-4c1c-814b-7b3eb801b010] Running
	I1123 08:53:57.965859 1219561 system_pods.go:89] "kindnet-fnbgj" [ff60f979-e04b-41da-8682-971a31d72da3] Running
	I1123 08:53:57.965863 1219561 system_pods.go:89] "kube-apiserver-old-k8s-version-283312" [68187f7b-ab9d-4cda-97c7-0559bc9c6b8b] Running
	I1123 08:53:57.965870 1219561 system_pods.go:89] "kube-controller-manager-old-k8s-version-283312" [6824fd9a-3bcc-4856-b840-5f6c6866e870] Running
	I1123 08:53:57.965874 1219561 system_pods.go:89] "kube-proxy-5w4q4" [886c8da3-dfce-4d49-b73c-6799d52d1028] Running
	I1123 08:53:57.965878 1219561 system_pods.go:89] "kube-scheduler-old-k8s-version-283312" [e1a93883-3c97-4c10-abcc-8917c5752ebf] Running
	I1123 08:53:57.965882 1219561 system_pods.go:89] "storage-provisioner" [f8356741-0113-4d0f-b602-081220c219b4] Running
	I1123 08:53:57.965890 1219561 system_pods.go:126] duration metric: took 1.545404297s to wait for k8s-apps to be running ...
	I1123 08:53:57.965898 1219561 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:53:57.965962 1219561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:53:57.979447 1219561 system_svc.go:56] duration metric: took 13.539733ms WaitForService to wait for kubelet
	I1123 08:53:57.979474 1219561 kubeadm.go:587] duration metric: took 16.705026408s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:53:57.979493 1219561 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:53:57.982643 1219561 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:53:57.982691 1219561 node_conditions.go:123] node cpu capacity is 2
	I1123 08:53:57.982704 1219561 node_conditions.go:105] duration metric: took 3.205312ms to run NodePressure ...
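	The NodePressure verification reads capacity and conditions from the node object; the same data can be pulled with kubectl, for example:
	  kubectl get node old-k8s-version-283312 -o jsonpath='{.status.allocatable}{"\n"}'
	  kubectl get node old-k8s-version-283312 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'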
	I1123 08:53:57.982715 1219561 start.go:242] waiting for startup goroutines ...
	I1123 08:53:57.982731 1219561 start.go:247] waiting for cluster config update ...
	I1123 08:53:57.982750 1219561 start.go:256] writing updated cluster config ...
	I1123 08:53:57.983127 1219561 ssh_runner.go:195] Run: rm -f paused
	I1123 08:53:57.987295 1219561 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:53:57.991858 1219561 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-mpf62" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:53:57.997586 1219561 pod_ready.go:94] pod "coredns-5dd5756b68-mpf62" is "Ready"
	I1123 08:53:57.997620 1219561 pod_ready.go:86] duration metric: took 5.734637ms for pod "coredns-5dd5756b68-mpf62" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:53:58.005727 1219561 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:53:58.011831 1219561 pod_ready.go:94] pod "etcd-old-k8s-version-283312" is "Ready"
	I1123 08:53:58.011861 1219561 pod_ready.go:86] duration metric: took 6.099354ms for pod "etcd-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:53:58.015413 1219561 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:53:58.020842 1219561 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-283312" is "Ready"
	I1123 08:53:58.020876 1219561 pod_ready.go:86] duration metric: took 5.427314ms for pod "kube-apiserver-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:53:58.024122 1219561 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:53:58.391667 1219561 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-283312" is "Ready"
	I1123 08:53:58.391699 1219561 pod_ready.go:86] duration metric: took 367.547141ms for pod "kube-controller-manager-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:53:58.592812 1219561 pod_ready.go:83] waiting for pod "kube-proxy-5w4q4" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:53:58.991396 1219561 pod_ready.go:94] pod "kube-proxy-5w4q4" is "Ready"
	I1123 08:53:58.991423 1219561 pod_ready.go:86] duration metric: took 398.585506ms for pod "kube-proxy-5w4q4" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:53:59.192300 1219561 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:53:59.591428 1219561 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-283312" is "Ready"
	I1123 08:53:59.591456 1219561 pod_ready.go:86] duration metric: took 399.129178ms for pod "kube-scheduler-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:53:59.591470 1219561 pod_ready.go:40] duration metric: took 1.604140505s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:53:59.644718 1219561 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1123 08:53:59.647797 1219561 out.go:203] 
	W1123 08:53:59.650672 1219561 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 08:53:59.653541 1219561 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 08:53:59.657323 1219561 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-283312" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 08:53:56 old-k8s-version-283312 crio[840]: time="2025-11-23T08:53:56.74153997Z" level=info msg="Created container 6c985f449d71e1f9d2c90a6dcf6692029f6ffbe2f005c9f43f2837f536bcfc79: kube-system/coredns-5dd5756b68-mpf62/coredns" id=120cd754-0c7d-4ee7-b2c9-a7b2772ffbf4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:53:56 old-k8s-version-283312 crio[840]: time="2025-11-23T08:53:56.742293599Z" level=info msg="Starting container: 6c985f449d71e1f9d2c90a6dcf6692029f6ffbe2f005c9f43f2837f536bcfc79" id=49761983-f974-4df9-984e-f0add12256db name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:53:56 old-k8s-version-283312 crio[840]: time="2025-11-23T08:53:56.744101229Z" level=info msg="Started container" PID=1962 containerID=6c985f449d71e1f9d2c90a6dcf6692029f6ffbe2f005c9f43f2837f536bcfc79 description=kube-system/coredns-5dd5756b68-mpf62/coredns id=49761983-f974-4df9-984e-f0add12256db name=/runtime.v1.RuntimeService/StartContainer sandboxID=a67031000359375bfccfc03f2a22b3ff6fbcf4d45ab69bdd4b7c870e3ee0af4c
	Nov 23 08:54:00 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:00.1667379Z" level=info msg="Running pod sandbox: default/busybox/POD" id=50564342-b9ff-46a2-9b71-de1aea7a1030 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:54:00 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:00.166833241Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:54:00 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:00.18176889Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:17d970531deeec36f4f90eec36924fa1d1c0848b738051b995c9d1f35339b6ea UID:a288cc83-ae5e-414e-b584-9cd4bebbd5e8 NetNS:/var/run/netns/571edfe2-b15b-458e-875d-23269a189b4b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b368}] Aliases:map[]}"
	Nov 23 08:54:00 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:00.181834587Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 08:54:00 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:00.200776757Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:17d970531deeec36f4f90eec36924fa1d1c0848b738051b995c9d1f35339b6ea UID:a288cc83-ae5e-414e-b584-9cd4bebbd5e8 NetNS:/var/run/netns/571edfe2-b15b-458e-875d-23269a189b4b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b368}] Aliases:map[]}"
	Nov 23 08:54:00 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:00.201269166Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 08:54:00 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:00.209588446Z" level=info msg="Ran pod sandbox 17d970531deeec36f4f90eec36924fa1d1c0848b738051b995c9d1f35339b6ea with infra container: default/busybox/POD" id=50564342-b9ff-46a2-9b71-de1aea7a1030 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:54:00 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:00.2117614Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=86bfa30a-b3f5-4261-afc6-2e212ac9209b name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:54:00 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:00.212550006Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=86bfa30a-b3f5-4261-afc6-2e212ac9209b name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:54:00 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:00.21263071Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=86bfa30a-b3f5-4261-afc6-2e212ac9209b name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:54:00 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:00.213762601Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3489b194-f3cc-4c3e-bfb5-cf5ebbe82caa name=/runtime.v1.ImageService/PullImage
	Nov 23 08:54:00 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:00.217058396Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:54:02 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:02.21239653Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=3489b194-f3cc-4c3e-bfb5-cf5ebbe82caa name=/runtime.v1.ImageService/PullImage
	Nov 23 08:54:02 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:02.213585396Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=882b6e84-e5e1-4612-9c09-149df0b4b19e name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:54:02 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:02.216076454Z" level=info msg="Creating container: default/busybox/busybox" id=46a65e8d-2c46-4e98-97ae-ea49363c50c1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:54:02 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:02.216271919Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:54:02 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:02.221172512Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:54:02 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:02.221713404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:54:02 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:02.239072191Z" level=info msg="Created container b4dca499df6268edb69faa19c086c53f7e9a45ebf0a5e18b1f13828c118f255b: default/busybox/busybox" id=46a65e8d-2c46-4e98-97ae-ea49363c50c1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:54:02 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:02.240475229Z" level=info msg="Starting container: b4dca499df6268edb69faa19c086c53f7e9a45ebf0a5e18b1f13828c118f255b" id=7902a0fa-8ecb-43e8-9d1a-827505416ca9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:54:02 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:02.243119366Z" level=info msg="Started container" PID=2015 containerID=b4dca499df6268edb69faa19c086c53f7e9a45ebf0a5e18b1f13828c118f255b description=default/busybox/busybox id=7902a0fa-8ecb-43e8-9d1a-827505416ca9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=17d970531deeec36f4f90eec36924fa1d1c0848b738051b995c9d1f35339b6ea
	Nov 23 08:54:08 old-k8s-version-283312 crio[840]: time="2025-11-23T08:54:08.089439841Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
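	The CRI-O entries above show the on-demand pull of the busybox test image for the default/busybox pod; the same status-then-pull sequence can be reproduced on the node with crictl:
	  sudo crictl inspecti gcr.io/k8s-minikube/busybox:1.28.4-glibc   # errors with "not found" before the pull
	  sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc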
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	b4dca499df626       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   17d970531deee       busybox                                          default
	6c985f449d71e       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      12 seconds ago      Running             coredns                   0                   a670310003593       coredns-5dd5756b68-mpf62                         kube-system
	8f4febfac7663       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   9ade3bd6185fc       storage-provisioner                              kube-system
	7aaad690d6305       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   6bad9a8b04e5e       kindnet-fnbgj                                    kube-system
	c14ae2076d41c       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      26 seconds ago      Running             kube-proxy                0                   156fca3cab48f       kube-proxy-5w4q4                                 kube-system
	0cc5ba66b4f43       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   749f185f123ea       kube-scheduler-old-k8s-version-283312            kube-system
	eabefa9ea430c       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   6c85aa65f810e       kube-controller-manager-old-k8s-version-283312   kube-system
	b0365b001fc04       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   51372cd22a48a       etcd-old-k8s-version-283312                      kube-system
	92f6977d31863       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   4d3b4c037819b       kube-apiserver-old-k8s-version-283312            kube-system
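	This table is the report's capture of the container listing on the node, which corresponds to querying CRI-O through crictl, for example:
	  sudo crictl ps -a        # all containers, running and exited
	  sudo crictl pods         # the pod sandboxes referenced in the POD ID column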
	
	
	==> coredns [6c985f449d71e1f9d2c90a6dcf6692029f6ffbe2f005c9f43f2837f536bcfc79] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41940 - 64924 "HINFO IN 4332557640223907241.8862381240299224187. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.04531946s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-283312
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-283312
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=old-k8s-version-283312
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_53_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:53:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-283312
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:53:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:53:59 +0000   Sun, 23 Nov 2025 08:53:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:53:59 +0000   Sun, 23 Nov 2025 08:53:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:53:59 +0000   Sun, 23 Nov 2025 08:53:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:53:59 +0000   Sun, 23 Nov 2025 08:53:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-283312
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                392eb6cc-4f42-4cea-8c55-b6ca8bbf6612
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-mpf62                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-283312                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         42s
	  kube-system                 kindnet-fnbgj                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-283312             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-283312    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-5w4q4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-283312             100m (5%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s   kubelet          Node old-k8s-version-283312 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s   kubelet          Node old-k8s-version-283312 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s   kubelet          Node old-k8s-version-283312 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node old-k8s-version-283312 event: Registered Node old-k8s-version-283312 in Controller
	  Normal  NodeReady                13s   kubelet          Node old-k8s-version-283312 status is now: NodeReady
	
	
	==> dmesg <==
	[ +51.342642] overlayfs: idmapped layers are currently not supported
	[Nov23 08:28] overlayfs: idmapped layers are currently not supported
	[Nov23 08:32] overlayfs: idmapped layers are currently not supported
	[Nov23 08:33] overlayfs: idmapped layers are currently not supported
	[Nov23 08:34] overlayfs: idmapped layers are currently not supported
	[Nov23 08:35] overlayfs: idmapped layers are currently not supported
	[Nov23 08:36] overlayfs: idmapped layers are currently not supported
	[Nov23 08:37] overlayfs: idmapped layers are currently not supported
	[Nov23 08:38] overlayfs: idmapped layers are currently not supported
	[  +8.276067] overlayfs: idmapped layers are currently not supported
	[Nov23 08:39] overlayfs: idmapped layers are currently not supported
	[ +25.090966] overlayfs: idmapped layers are currently not supported
	[Nov23 08:40] overlayfs: idmapped layers are currently not supported
	[ +26.896711] overlayfs: idmapped layers are currently not supported
	[Nov23 08:41] overlayfs: idmapped layers are currently not supported
	[Nov23 08:43] overlayfs: idmapped layers are currently not supported
	[Nov23 08:45] overlayfs: idmapped layers are currently not supported
	[Nov23 08:46] overlayfs: idmapped layers are currently not supported
	[Nov23 08:47] overlayfs: idmapped layers are currently not supported
	[Nov23 08:49] overlayfs: idmapped layers are currently not supported
	[Nov23 08:51] overlayfs: idmapped layers are currently not supported
	[ +55.116920] overlayfs: idmapped layers are currently not supported
	[Nov23 08:52] overlayfs: idmapped layers are currently not supported
	[  +5.731396] overlayfs: idmapped layers are currently not supported
	[Nov23 08:53] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b0365b001fc043c3164be238a77d93e03d7fcba1969fece2a8b4206b9ee92af6] <==
	{"level":"info","ts":"2025-11-23T08:53:20.963502Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T08:53:20.962885Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"9f0758e1c58a86ed","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-11-23T08:53:20.963782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-23T08:53:20.963924Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-23T08:53:20.963089Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T08:53:20.966254Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T08:53:20.966297Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T08:53:21.917643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-23T08:53:21.917774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-23T08:53:21.917828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-23T08:53:21.917894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-23T08:53:21.917927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-23T08:53:21.917974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-23T08:53:21.918008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-23T08:53:21.920056Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:53:21.921608Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-283312 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T08:53:21.921801Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:53:21.923048Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T08:53:21.923145Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:53:21.923622Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T08:53:21.923773Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T08:53:21.923896Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:53:21.924066Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:53:21.924138Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:53:21.937855Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 08:54:09 up  9:36,  0 user,  load average: 3.09, 3.39, 2.63
	Linux old-k8s-version-283312 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7aaad690d63057c6803ff970a84232339da9baadda4b29b58063cc76a50ce332] <==
	I1123 08:53:45.635994       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:53:45.636292       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:53:45.636444       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:53:45.636482       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:53:45.636515       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:53:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:53:45.836731       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:53:45.836807       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:53:45.836841       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:53:45.837565       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:53:46.127490       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:53:46.127610       1 metrics.go:72] Registering metrics
	I1123 08:53:46.127692       1 controller.go:711] "Syncing nftables rules"
	I1123 08:53:55.842360       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:53:55.842413       1 main.go:301] handling current node
	I1123 08:54:05.838653       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:54:05.838687       1 main.go:301] handling current node
	
	
	==> kube-apiserver [92f6977d31863198aa908460dd89e4851c5eedea344dfc323a816f617b34e9db] <==
	I1123 08:53:25.370532       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 08:53:25.370549       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 08:53:25.370732       1 aggregator.go:166] initial CRD sync complete...
	I1123 08:53:25.370750       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 08:53:25.370755       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:53:25.370761       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:53:25.374870       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1123 08:53:25.388332       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 08:53:25.391781       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1123 08:53:25.432096       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:53:26.073578       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:53:26.078827       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:53:26.078855       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:53:26.710979       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:53:26.762698       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:53:26.847061       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:53:26.854224       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 08:53:26.855438       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 08:53:26.860503       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:53:27.117397       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 08:53:28.272098       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 08:53:28.285090       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:53:28.301589       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1123 08:53:40.847174       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:53:41.076824       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [eabefa9ea430c1e080d950c9e9fa2488fe79c1c908cd849d530c6aafc7814d58] <==
	I1123 08:53:40.976247       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 08:53:41.011126       1 shared_informer.go:318] Caches are synced for deployment
	I1123 08:53:41.025230       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1123 08:53:41.033767       1 shared_informer.go:318] Caches are synced for disruption
	I1123 08:53:41.042094       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 08:53:41.096068       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1123 08:53:41.166619       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-bztw7"
	I1123 08:53:41.197491       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-mpf62"
	I1123 08:53:41.214662       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="118.207028ms"
	I1123 08:53:41.242252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.524337ms"
	I1123 08:53:41.242371       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.931µs"
	I1123 08:53:41.242458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.729µs"
	I1123 08:53:41.391269       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:53:41.392420       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:53:41.392442       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 08:53:42.436388       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1123 08:53:42.452066       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-bztw7"
	I1123 08:53:42.470862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="34.202572ms"
	I1123 08:53:42.481665       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.751281ms"
	I1123 08:53:42.481851       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="149.026µs"
	I1123 08:53:56.312268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.477µs"
	I1123 08:53:56.349272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.346µs"
	I1123 08:53:57.708589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.84667ms"
	I1123 08:53:57.708757       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.53µs"
	I1123 08:54:00.853364       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [c14ae2076d41c260d4ab45ecb45f6c9f79f88ed4ddc601d7e4c30f9b3a23b660] <==
	I1123 08:53:42.818710       1 server_others.go:69] "Using iptables proxy"
	I1123 08:53:42.833486       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1123 08:53:42.854534       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:53:42.856690       1 server_others.go:152] "Using iptables Proxier"
	I1123 08:53:42.856742       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 08:53:42.856750       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 08:53:42.856790       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 08:53:42.857025       1 server.go:846] "Version info" version="v1.28.0"
	I1123 08:53:42.857044       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:53:42.858338       1 config.go:188] "Starting service config controller"
	I1123 08:53:42.859590       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 08:53:42.859641       1 config.go:97] "Starting endpoint slice config controller"
	I1123 08:53:42.859648       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 08:53:42.864220       1 config.go:315] "Starting node config controller"
	I1123 08:53:42.864306       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 08:53:42.959753       1 shared_informer.go:318] Caches are synced for service config
	I1123 08:53:42.959768       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1123 08:53:42.964482       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [0cc5ba66b4f43bda9230f21ad1e9a74ca3b24681f97de2ffc1b580090fba7513] <==
	W1123 08:53:25.519587       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1123 08:53:25.519991       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 08:53:25.519665       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1123 08:53:25.520548       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1123 08:53:25.519713       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1123 08:53:25.520631       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1123 08:53:25.519747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1123 08:53:25.520842       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1123 08:53:25.519779       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1123 08:53:25.519815       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1123 08:53:25.521011       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1123 08:53:25.520990       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1123 08:53:26.357673       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1123 08:53:26.357799       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1123 08:53:26.372109       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1123 08:53:26.372213       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1123 08:53:26.387956       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1123 08:53:26.387995       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 08:53:26.434801       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1123 08:53:26.434901       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1123 08:53:26.464393       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1123 08:53:26.464541       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1123 08:53:26.505758       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1123 08:53:26.505878       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1123 08:53:29.303894       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 08:53:42 old-k8s-version-283312 kubelet[1386]: E1123 08:53:42.090908    1386 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Nov 23 08:53:42 old-k8s-version-283312 kubelet[1386]: E1123 08:53:42.091042    1386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/886c8da3-dfce-4d49-b73c-6799d52d1028-kube-proxy podName:886c8da3-dfce-4d49-b73c-6799d52d1028 nodeName:}" failed. No retries permitted until 2025-11-23 08:53:42.59101281 +0000 UTC m=+14.355730609 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/886c8da3-dfce-4d49-b73c-6799d52d1028-kube-proxy") pod "kube-proxy-5w4q4" (UID: "886c8da3-dfce-4d49-b73c-6799d52d1028") : failed to sync configmap cache: timed out waiting for the condition
	Nov 23 08:53:42 old-k8s-version-283312 kubelet[1386]: E1123 08:53:42.107671    1386 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 23 08:53:42 old-k8s-version-283312 kubelet[1386]: E1123 08:53:42.107735    1386 projected.go:198] Error preparing data for projected volume kube-api-access-lnpwt for pod kube-system/kube-proxy-5w4q4: failed to sync configmap cache: timed out waiting for the condition
	Nov 23 08:53:42 old-k8s-version-283312 kubelet[1386]: E1123 08:53:42.107840    1386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/886c8da3-dfce-4d49-b73c-6799d52d1028-kube-api-access-lnpwt podName:886c8da3-dfce-4d49-b73c-6799d52d1028 nodeName:}" failed. No retries permitted until 2025-11-23 08:53:42.607809899 +0000 UTC m=+14.372527689 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lnpwt" (UniqueName: "kubernetes.io/projected/886c8da3-dfce-4d49-b73c-6799d52d1028-kube-api-access-lnpwt") pod "kube-proxy-5w4q4" (UID: "886c8da3-dfce-4d49-b73c-6799d52d1028") : failed to sync configmap cache: timed out waiting for the condition
	Nov 23 08:53:42 old-k8s-version-283312 kubelet[1386]: E1123 08:53:42.241146    1386 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 23 08:53:42 old-k8s-version-283312 kubelet[1386]: E1123 08:53:42.241345    1386 projected.go:198] Error preparing data for projected volume kube-api-access-xmfjd for pod kube-system/kindnet-fnbgj: failed to sync configmap cache: timed out waiting for the condition
	Nov 23 08:53:42 old-k8s-version-283312 kubelet[1386]: E1123 08:53:42.241489    1386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff60f979-e04b-41da-8682-971a31d72da3-kube-api-access-xmfjd podName:ff60f979-e04b-41da-8682-971a31d72da3 nodeName:}" failed. No retries permitted until 2025-11-23 08:53:42.741460337 +0000 UTC m=+14.506178127 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xmfjd" (UniqueName: "kubernetes.io/projected/ff60f979-e04b-41da-8682-971a31d72da3-kube-api-access-xmfjd") pod "kindnet-fnbgj" (UID: "ff60f979-e04b-41da-8682-971a31d72da3") : failed to sync configmap cache: timed out waiting for the condition
	Nov 23 08:53:42 old-k8s-version-283312 kubelet[1386]: W1123 08:53:42.735514    1386 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3/crio-156fca3cab48f17f31a0731544ffe12f26a67926aee0e47dc2dedec22abd4d1b WatchSource:0}: Error finding container 156fca3cab48f17f31a0731544ffe12f26a67926aee0e47dc2dedec22abd4d1b: Status 404 returned error can't find the container with id 156fca3cab48f17f31a0731544ffe12f26a67926aee0e47dc2dedec22abd4d1b
	Nov 23 08:53:45 old-k8s-version-283312 kubelet[1386]: I1123 08:53:45.623356    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5w4q4" podStartSLOduration=5.62331302 podCreationTimestamp="2025-11-23 08:53:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:53:43.624833885 +0000 UTC m=+15.389551674" watchObservedRunningTime="2025-11-23 08:53:45.62331302 +0000 UTC m=+17.388030818"
	Nov 23 08:53:48 old-k8s-version-283312 kubelet[1386]: I1123 08:53:48.395693    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-fnbgj" podStartSLOduration=5.907375938 podCreationTimestamp="2025-11-23 08:53:40 +0000 UTC" firstStartedPulling="2025-11-23 08:53:43.076753414 +0000 UTC m=+14.841471203" lastFinishedPulling="2025-11-23 08:53:45.565015105 +0000 UTC m=+17.329732895" observedRunningTime="2025-11-23 08:53:45.624210932 +0000 UTC m=+17.388928721" watchObservedRunningTime="2025-11-23 08:53:48.39563763 +0000 UTC m=+20.160355428"
	Nov 23 08:53:56 old-k8s-version-283312 kubelet[1386]: I1123 08:53:56.276378    1386 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 23 08:53:56 old-k8s-version-283312 kubelet[1386]: I1123 08:53:56.307272    1386 topology_manager.go:215] "Topology Admit Handler" podUID="f8356741-0113-4d0f-b602-081220c219b4" podNamespace="kube-system" podName="storage-provisioner"
	Nov 23 08:53:56 old-k8s-version-283312 kubelet[1386]: I1123 08:53:56.312126    1386 topology_manager.go:215] "Topology Admit Handler" podUID="29956376-ee4e-402e-98dc-864a4ff169d3" podNamespace="kube-system" podName="coredns-5dd5756b68-mpf62"
	Nov 23 08:53:56 old-k8s-version-283312 kubelet[1386]: I1123 08:53:56.415629    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2phk7\" (UniqueName: \"kubernetes.io/projected/f8356741-0113-4d0f-b602-081220c219b4-kube-api-access-2phk7\") pod \"storage-provisioner\" (UID: \"f8356741-0113-4d0f-b602-081220c219b4\") " pod="kube-system/storage-provisioner"
	Nov 23 08:53:56 old-k8s-version-283312 kubelet[1386]: I1123 08:53:56.415939    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rmvb\" (UniqueName: \"kubernetes.io/projected/29956376-ee4e-402e-98dc-864a4ff169d3-kube-api-access-9rmvb\") pod \"coredns-5dd5756b68-mpf62\" (UID: \"29956376-ee4e-402e-98dc-864a4ff169d3\") " pod="kube-system/coredns-5dd5756b68-mpf62"
	Nov 23 08:53:56 old-k8s-version-283312 kubelet[1386]: I1123 08:53:56.416116    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f8356741-0113-4d0f-b602-081220c219b4-tmp\") pod \"storage-provisioner\" (UID: \"f8356741-0113-4d0f-b602-081220c219b4\") " pod="kube-system/storage-provisioner"
	Nov 23 08:53:56 old-k8s-version-283312 kubelet[1386]: I1123 08:53:56.416299    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29956376-ee4e-402e-98dc-864a4ff169d3-config-volume\") pod \"coredns-5dd5756b68-mpf62\" (UID: \"29956376-ee4e-402e-98dc-864a4ff169d3\") " pod="kube-system/coredns-5dd5756b68-mpf62"
	Nov 23 08:53:56 old-k8s-version-283312 kubelet[1386]: W1123 08:53:56.678790    1386 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3/crio-a67031000359375bfccfc03f2a22b3ff6fbcf4d45ab69bdd4b7c870e3ee0af4c WatchSource:0}: Error finding container a67031000359375bfccfc03f2a22b3ff6fbcf4d45ab69bdd4b7c870e3ee0af4c: Status 404 returned error can't find the container with id a67031000359375bfccfc03f2a22b3ff6fbcf4d45ab69bdd4b7c870e3ee0af4c
	Nov 23 08:53:57 old-k8s-version-283312 kubelet[1386]: I1123 08:53:57.692897    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.692815626 podCreationTimestamp="2025-11-23 08:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:53:57.678324937 +0000 UTC m=+29.443042726" watchObservedRunningTime="2025-11-23 08:53:57.692815626 +0000 UTC m=+29.457533424"
	Nov 23 08:53:57 old-k8s-version-283312 kubelet[1386]: I1123 08:53:57.693463    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-mpf62" podStartSLOduration=16.693433759 podCreationTimestamp="2025-11-23 08:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:53:57.692182413 +0000 UTC m=+29.456900211" watchObservedRunningTime="2025-11-23 08:53:57.693433759 +0000 UTC m=+29.458151590"
	Nov 23 08:53:59 old-k8s-version-283312 kubelet[1386]: I1123 08:53:59.863610    1386 topology_manager.go:215] "Topology Admit Handler" podUID="a288cc83-ae5e-414e-b584-9cd4bebbd5e8" podNamespace="default" podName="busybox"
	Nov 23 08:53:59 old-k8s-version-283312 kubelet[1386]: I1123 08:53:59.939019    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhdsj\" (UniqueName: \"kubernetes.io/projected/a288cc83-ae5e-414e-b584-9cd4bebbd5e8-kube-api-access-vhdsj\") pod \"busybox\" (UID: \"a288cc83-ae5e-414e-b584-9cd4bebbd5e8\") " pod="default/busybox"
	Nov 23 08:54:00 old-k8s-version-283312 kubelet[1386]: W1123 08:54:00.204050    1386 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3/crio-17d970531deeec36f4f90eec36924fa1d1c0848b738051b995c9d1f35339b6ea WatchSource:0}: Error finding container 17d970531deeec36f4f90eec36924fa1d1c0848b738051b995c9d1f35339b6ea: Status 404 returned error can't find the container with id 17d970531deeec36f4f90eec36924fa1d1c0848b738051b995c9d1f35339b6ea
	Nov 23 08:54:08 old-k8s-version-283312 kubelet[1386]: E1123 08:54:08.143457    1386 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51968->127.0.0.1:43607: write tcp 127.0.0.1:51968->127.0.0.1:43607: write: broken pipe
	
	
	==> storage-provisioner [8f4febfac766351cfdb36de837ff756f947345294f48da8c18960b2f6d2f389b] <==
	I1123 08:53:56.698294       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:53:56.722509       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:53:56.722626       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 08:53:56.735788       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:53:56.738472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-283312_15187bec-0320-41e1-a806-6468620505c0!
	I1123 08:53:56.739429       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69e256c0-2660-429a-be2b-9531ab7aed97", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-283312_15187bec-0320-41e1-a806-6468620505c0 became leader
	I1123 08:53:56.839418       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-283312_15187bec-0320-41e1-a806-6468620505c0!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-283312 -n old-k8s-version-283312
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-283312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.42s)
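Note: the EnableAddonWhileActive post-mortem above and the Pause failure that follows come from the same old-k8s-version-283312 profile. In the Pause run below, minikube exits with GUEST_PAUSE because "sudo runc list -f json" on the node fails with "open /run/runc: no such file or directory", even though crictl still reports running kube-system containers. A minimal sketch for re-running those node-side checks by hand, using only commands that appear in this report; whether the runtime state directory simply lives somewhere other than /run/runc under this cri-o build is an unverified assumption:

	# Sketch: repeat the checks minikube's pause path performs on the node (profile name taken from this report)
	out/minikube-linux-arm64 -p old-k8s-version-283312 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-arm64 -p old-k8s-version-283312 ssh -- sudo runc list -f json     # fails here: open /run/runc: no such file or directory
	out/minikube-linux-arm64 -p old-k8s-version-283312 ssh -- ls -d /run/runc /run/crun  # assumption: see whether the OCI runtime keeps its state elsewhere

The first command should list the same container IDs that cri.go logs below; the second reproduces the error that turns into the GUEST_PAUSE exit after minikube's retries are exhausted.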

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-283312 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-283312 --alsologtostderr -v=1: exit status 80 (2.400258605s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-283312 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:55:22.083968 1225388 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:55:22.084165 1225388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:55:22.084191 1225388 out.go:374] Setting ErrFile to fd 2...
	I1123 08:55:22.084210 1225388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:55:22.084526 1225388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:55:22.084833 1225388 out.go:368] Setting JSON to false
	I1123 08:55:22.084884 1225388 mustload.go:66] Loading cluster: old-k8s-version-283312
	I1123 08:55:22.085351 1225388 config.go:182] Loaded profile config "old-k8s-version-283312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:55:22.085862 1225388 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Status}}
	I1123 08:55:22.103722 1225388 host.go:66] Checking if "old-k8s-version-283312" exists ...
	I1123 08:55:22.104066 1225388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:55:22.166891 1225388 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 08:55:22.156799862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:55:22.167705 1225388 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-283312 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 08:55:22.171598 1225388 out.go:179] * Pausing node old-k8s-version-283312 ... 
	I1123 08:55:22.174474 1225388 host.go:66] Checking if "old-k8s-version-283312" exists ...
	I1123 08:55:22.174827 1225388 ssh_runner.go:195] Run: systemctl --version
	I1123 08:55:22.174884 1225388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:55:22.192082 1225388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34517 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:55:22.297775 1225388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:55:22.311311 1225388 pause.go:52] kubelet running: true
	I1123 08:55:22.311428 1225388 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:55:22.542570 1225388 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:55:22.542676 1225388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:55:22.609922 1225388 cri.go:89] found id: "55f4a448b0d3e5f8186ddea06d9e649d1ce7b5dc009cab7ea7b94a06ee6d2337"
	I1123 08:55:22.609946 1225388 cri.go:89] found id: "fcf4f481baec79c0761b307b5212215829faaa625e6e489cf694d8fb1d2d4062"
	I1123 08:55:22.609951 1225388 cri.go:89] found id: "b31f17b2d91d4748333acd703896175fb35e33ee0b0916ca17f1f3d164797f0b"
	I1123 08:55:22.609955 1225388 cri.go:89] found id: "63c05087bc3492d07d07dc6676698eb369b01ebf027a57cb7753312ef9a68e38"
	I1123 08:55:22.609959 1225388 cri.go:89] found id: "ce8860867859e5b27abf00bdcc1cc203fb3241543231bf6a3915cb8500c83601"
	I1123 08:55:22.609968 1225388 cri.go:89] found id: "ca452ae3435abe579950d1a807b7521d73e868fac14b3362725c339938db9ba9"
	I1123 08:55:22.609971 1225388 cri.go:89] found id: "d7311c2c5699ad0d41a6408dfece98289565e80a60184519834f707726b47a53"
	I1123 08:55:22.609974 1225388 cri.go:89] found id: "7b9b4a6e426f263cb39534651d3c1f27d4fc7d585032ccbec072ae66318023df"
	I1123 08:55:22.609977 1225388 cri.go:89] found id: "247b7aa0c1261bc65c70f1271c4f8036028cf3420d07070ead4ca25228884653"
	I1123 08:55:22.609986 1225388 cri.go:89] found id: "8c99e240d0d3c5b09f26cce84e285d1b5311e5d85caeceee56e98e8d83ab6deb"
	I1123 08:55:22.609989 1225388 cri.go:89] found id: "62de83d6e4fd10e27fd4b0e1f4adf0423f70c4e01537d1a2dd0b9dc5df5f955a"
	I1123 08:55:22.609992 1225388 cri.go:89] found id: ""
	I1123 08:55:22.610042 1225388 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:55:22.630292 1225388 retry.go:31] will retry after 197.82178ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:55:22Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:55:22.828736 1225388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:55:22.842442 1225388 pause.go:52] kubelet running: false
	I1123 08:55:22.842535 1225388 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:55:23.012295 1225388 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:55:23.012401 1225388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:55:23.079078 1225388 cri.go:89] found id: "55f4a448b0d3e5f8186ddea06d9e649d1ce7b5dc009cab7ea7b94a06ee6d2337"
	I1123 08:55:23.079102 1225388 cri.go:89] found id: "fcf4f481baec79c0761b307b5212215829faaa625e6e489cf694d8fb1d2d4062"
	I1123 08:55:23.079107 1225388 cri.go:89] found id: "b31f17b2d91d4748333acd703896175fb35e33ee0b0916ca17f1f3d164797f0b"
	I1123 08:55:23.079110 1225388 cri.go:89] found id: "63c05087bc3492d07d07dc6676698eb369b01ebf027a57cb7753312ef9a68e38"
	I1123 08:55:23.079114 1225388 cri.go:89] found id: "ce8860867859e5b27abf00bdcc1cc203fb3241543231bf6a3915cb8500c83601"
	I1123 08:55:23.079118 1225388 cri.go:89] found id: "ca452ae3435abe579950d1a807b7521d73e868fac14b3362725c339938db9ba9"
	I1123 08:55:23.079154 1225388 cri.go:89] found id: "d7311c2c5699ad0d41a6408dfece98289565e80a60184519834f707726b47a53"
	I1123 08:55:23.079167 1225388 cri.go:89] found id: "7b9b4a6e426f263cb39534651d3c1f27d4fc7d585032ccbec072ae66318023df"
	I1123 08:55:23.079206 1225388 cri.go:89] found id: "247b7aa0c1261bc65c70f1271c4f8036028cf3420d07070ead4ca25228884653"
	I1123 08:55:23.079214 1225388 cri.go:89] found id: "8c99e240d0d3c5b09f26cce84e285d1b5311e5d85caeceee56e98e8d83ab6deb"
	I1123 08:55:23.079218 1225388 cri.go:89] found id: "62de83d6e4fd10e27fd4b0e1f4adf0423f70c4e01537d1a2dd0b9dc5df5f955a"
	I1123 08:55:23.079221 1225388 cri.go:89] found id: ""
	I1123 08:55:23.079268 1225388 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:55:23.092135 1225388 retry.go:31] will retry after 438.300149ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:55:23Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:55:23.530735 1225388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:55:23.544490 1225388 pause.go:52] kubelet running: false
	I1123 08:55:23.544577 1225388 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:55:23.744074 1225388 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:55:23.744273 1225388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:55:23.816627 1225388 cri.go:89] found id: "55f4a448b0d3e5f8186ddea06d9e649d1ce7b5dc009cab7ea7b94a06ee6d2337"
	I1123 08:55:23.816651 1225388 cri.go:89] found id: "fcf4f481baec79c0761b307b5212215829faaa625e6e489cf694d8fb1d2d4062"
	I1123 08:55:23.816656 1225388 cri.go:89] found id: "b31f17b2d91d4748333acd703896175fb35e33ee0b0916ca17f1f3d164797f0b"
	I1123 08:55:23.816661 1225388 cri.go:89] found id: "63c05087bc3492d07d07dc6676698eb369b01ebf027a57cb7753312ef9a68e38"
	I1123 08:55:23.816670 1225388 cri.go:89] found id: "ce8860867859e5b27abf00bdcc1cc203fb3241543231bf6a3915cb8500c83601"
	I1123 08:55:23.816674 1225388 cri.go:89] found id: "ca452ae3435abe579950d1a807b7521d73e868fac14b3362725c339938db9ba9"
	I1123 08:55:23.816677 1225388 cri.go:89] found id: "d7311c2c5699ad0d41a6408dfece98289565e80a60184519834f707726b47a53"
	I1123 08:55:23.816716 1225388 cri.go:89] found id: "7b9b4a6e426f263cb39534651d3c1f27d4fc7d585032ccbec072ae66318023df"
	I1123 08:55:23.816720 1225388 cri.go:89] found id: "247b7aa0c1261bc65c70f1271c4f8036028cf3420d07070ead4ca25228884653"
	I1123 08:55:23.816725 1225388 cri.go:89] found id: "8c99e240d0d3c5b09f26cce84e285d1b5311e5d85caeceee56e98e8d83ab6deb"
	I1123 08:55:23.816734 1225388 cri.go:89] found id: "62de83d6e4fd10e27fd4b0e1f4adf0423f70c4e01537d1a2dd0b9dc5df5f955a"
	I1123 08:55:23.816738 1225388 cri.go:89] found id: ""
	I1123 08:55:23.816816 1225388 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:55:23.828463 1225388 retry.go:31] will retry after 299.580183ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:55:23Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:55:24.129058 1225388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:55:24.142090 1225388 pause.go:52] kubelet running: false
	I1123 08:55:24.142152 1225388 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:55:24.314930 1225388 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:55:24.315009 1225388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:55:24.391454 1225388 cri.go:89] found id: "55f4a448b0d3e5f8186ddea06d9e649d1ce7b5dc009cab7ea7b94a06ee6d2337"
	I1123 08:55:24.391526 1225388 cri.go:89] found id: "fcf4f481baec79c0761b307b5212215829faaa625e6e489cf694d8fb1d2d4062"
	I1123 08:55:24.391567 1225388 cri.go:89] found id: "b31f17b2d91d4748333acd703896175fb35e33ee0b0916ca17f1f3d164797f0b"
	I1123 08:55:24.391585 1225388 cri.go:89] found id: "63c05087bc3492d07d07dc6676698eb369b01ebf027a57cb7753312ef9a68e38"
	I1123 08:55:24.391603 1225388 cri.go:89] found id: "ce8860867859e5b27abf00bdcc1cc203fb3241543231bf6a3915cb8500c83601"
	I1123 08:55:24.391615 1225388 cri.go:89] found id: "ca452ae3435abe579950d1a807b7521d73e868fac14b3362725c339938db9ba9"
	I1123 08:55:24.391619 1225388 cri.go:89] found id: "d7311c2c5699ad0d41a6408dfece98289565e80a60184519834f707726b47a53"
	I1123 08:55:24.391622 1225388 cri.go:89] found id: "7b9b4a6e426f263cb39534651d3c1f27d4fc7d585032ccbec072ae66318023df"
	I1123 08:55:24.391625 1225388 cri.go:89] found id: "247b7aa0c1261bc65c70f1271c4f8036028cf3420d07070ead4ca25228884653"
	I1123 08:55:24.391631 1225388 cri.go:89] found id: "8c99e240d0d3c5b09f26cce84e285d1b5311e5d85caeceee56e98e8d83ab6deb"
	I1123 08:55:24.391648 1225388 cri.go:89] found id: "62de83d6e4fd10e27fd4b0e1f4adf0423f70c4e01537d1a2dd0b9dc5df5f955a"
	I1123 08:55:24.391653 1225388 cri.go:89] found id: ""
	I1123 08:55:24.391705 1225388 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:55:24.406810 1225388 out.go:203] 
	W1123 08:55:24.409859 1225388 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:55:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:55:24.409880 1225388 out.go:285] * 
	W1123 08:55:24.418670 1225388 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:55:24.422077 1225388 out.go:203] 

                                                
                                                
** /stderr **
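The pause failure above comes down to `sudo runc list -f json` exiting 1 with "open /run/runc: no such file or directory" inside the node, even though crictl still lists the kube-system containers; minikube retries the listing a few times and then gives up with GUEST_PAUSE. A minimal sketch for reproducing the same probe by hand (hypothetical commands, assuming the profile name from this run and the crictl/runc invocations quoted in the log):

	out/minikube-linux-arm64 ssh -p old-k8s-version-283312 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-arm64 ssh -p old-k8s-version-283312 -- sudo runc list -f json     # reproduces the "open /run/runc" error
	out/minikube-linux-arm64 ssh -p old-k8s-version-283312 -- ls -ld /run/runc           # confirm whether the runc state directory exists
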
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-283312 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-283312
helpers_test.go:243: (dbg) docker inspect old-k8s-version-283312:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3",
	        "Created": "2025-11-23T08:53:01.800677774Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1223304,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:54:23.085319934Z",
	            "FinishedAt": "2025-11-23T08:54:22.2600648Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3/hosts",
	        "LogPath": "/var/lib/docker/containers/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3-json.log",
	        "Name": "/old-k8s-version-283312",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-283312:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-283312",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3",
	                "LowerDir": "/var/lib/docker/overlay2/f7800adf0bb2faf578ed2bf4a26065d85d982030afc07cc96e1142a50ec29c06-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f7800adf0bb2faf578ed2bf4a26065d85d982030afc07cc96e1142a50ec29c06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f7800adf0bb2faf578ed2bf4a26065d85d982030afc07cc96e1142a50ec29c06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f7800adf0bb2faf578ed2bf4a26065d85d982030afc07cc96e1142a50ec29c06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-283312",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-283312/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-283312",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-283312",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-283312",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ad30ffa09d6b740ec29580ccfd495e589ebb4705fedcbf70b8a48ad53e9303a",
	            "SandboxKey": "/var/run/docker/netns/4ad30ffa09d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34517"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34518"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34521"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34519"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34520"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-283312": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:b4:1c:8d:3f:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1c3c4615dfb11778d84973791ecb3bc879152d7ae7a1ee624548096be909deb9",
	                    "EndpointID": "034984b3626f1a8a4657c8933326191e6428adc3b444e2f71f9d8d5e57be688a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-283312",
	                        "205e5ea134d1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
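The inspect output above shows the kic container itself is healthy: State.Running is true (Pid 1223304), the restart at 08:54:23 succeeded, and the SSH/API ports are forwarded on 127.0.0.1. When triaging these pause failures it can be quicker to pull just those fields with a Go template instead of reading the full JSON; a minimal sketch using the same template syntax the harness uses elsewhere in this log:

	docker inspect -f '{{.State.Status}} pid={{.State.Pid}} ssh={{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-283312
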
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-283312 -n old-k8s-version-283312
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-283312 -n old-k8s-version-283312: exit status 2 (363.587668ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
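A non-zero status here is expected after the failed pause: the host container keeps running while the kubelet was just disabled (`pause.go:52] kubelet running: false` above), so the Host field still reports Running even though the Kubernetes components are down. A minimal sketch for printing every component at once (same profile; the extra field names are assumptions beyond .Host, which the test itself queries):

	out/minikube-linux-arm64 status -p old-k8s-version-283312 --format 'host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'
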
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-283312 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-283312 logs -n 25: (1.353464044s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-082524 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo containerd config dump                                                                                                                                                                                                  │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo crio config                                                                                                                                                                                                             │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ delete  │ -p cilium-082524                                                                                                                                                                                                                              │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │ 23 Nov 25 08:51 UTC │
	│ start   │ -p force-systemd-env-498438 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-498438  │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │ 23 Nov 25 08:52 UTC │
	│ delete  │ -p kubernetes-upgrade-354226                                                                                                                                                                                                                  │ kubernetes-upgrade-354226 │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ start   │ -p cert-expiration-322507 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-322507    │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ delete  │ -p force-systemd-env-498438                                                                                                                                                                                                                   │ force-systemd-env-498438  │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ start   │ -p cert-options-194318 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-194318       │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ ssh     │ cert-options-194318 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-194318       │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ ssh     │ -p cert-options-194318 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-194318       │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ delete  │ -p cert-options-194318                                                                                                                                                                                                                        │ cert-options-194318       │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ start   │ -p old-k8s-version-283312 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-283312    │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-283312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-283312    │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │                     │
	│ stop    │ -p old-k8s-version-283312 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-283312    │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-283312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-283312    │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:54 UTC │
	│ start   │ -p old-k8s-version-283312 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-283312    │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:55 UTC │
	│ image   │ old-k8s-version-283312 image list --format=json                                                                                                                                                                                               │ old-k8s-version-283312    │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ pause   │ -p old-k8s-version-283312 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-283312    │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:54:22
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:54:22.796041 1223176 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:54:22.796214 1223176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:54:22.796245 1223176 out.go:374] Setting ErrFile to fd 2...
	I1123 08:54:22.796264 1223176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:54:22.796519 1223176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:54:22.796903 1223176 out.go:368] Setting JSON to false
	I1123 08:54:22.797865 1223176 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34608,"bootTime":1763853455,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 08:54:22.798002 1223176 start.go:143] virtualization:  
	I1123 08:54:22.800941 1223176 out.go:179] * [old-k8s-version-283312] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:54:22.804783 1223176 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:54:22.804977 1223176 notify.go:221] Checking for updates...
	I1123 08:54:22.808671 1223176 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:54:22.811564 1223176 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:54:22.814512 1223176 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 08:54:22.817315 1223176 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:54:22.820231 1223176 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:54:22.823509 1223176 config.go:182] Loaded profile config "old-k8s-version-283312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:54:22.826992 1223176 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1123 08:54:22.829819 1223176 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:54:22.862566 1223176 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:54:22.862688 1223176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:54:22.919743 1223176 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:54:22.909990754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:54:22.919889 1223176 docker.go:319] overlay module found
	I1123 08:54:22.923061 1223176 out.go:179] * Using the docker driver based on existing profile
	I1123 08:54:22.925933 1223176 start.go:309] selected driver: docker
	I1123 08:54:22.925952 1223176 start.go:927] validating driver "docker" against &{Name:old-k8s-version-283312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-283312 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:54:22.926098 1223176 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:54:22.926815 1223176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:54:22.996793 1223176 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:54:22.987599484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:54:22.997163 1223176 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:54:22.997194 1223176 cni.go:84] Creating CNI manager for ""
	I1123 08:54:22.997271 1223176 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:54:22.997309 1223176 start.go:353] cluster config:
	{Name:old-k8s-version-283312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-283312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:54:23.003110 1223176 out.go:179] * Starting "old-k8s-version-283312" primary control-plane node in "old-k8s-version-283312" cluster
	I1123 08:54:23.006017 1223176 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:54:23.008796 1223176 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:54:23.011682 1223176 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:54:23.011740 1223176 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1123 08:54:23.011750 1223176 cache.go:65] Caching tarball of preloaded images
	I1123 08:54:23.011785 1223176 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:54:23.011843 1223176 preload.go:238] Found /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 08:54:23.011854 1223176 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1123 08:54:23.011976 1223176 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/config.json ...
	I1123 08:54:23.031527 1223176 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:54:23.031551 1223176 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:54:23.031565 1223176 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:54:23.031599 1223176 start.go:360] acquireMachinesLock for old-k8s-version-283312: {Name:mk6342c5cc3dd03ef4a67a137840af521342123c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:54:23.031658 1223176 start.go:364] duration metric: took 34.485µs to acquireMachinesLock for "old-k8s-version-283312"
	I1123 08:54:23.031682 1223176 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:54:23.031691 1223176 fix.go:54] fixHost starting: 
	I1123 08:54:23.031964 1223176 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Status}}
	I1123 08:54:23.049904 1223176 fix.go:112] recreateIfNeeded on old-k8s-version-283312: state=Stopped err=<nil>
	W1123 08:54:23.049933 1223176 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 08:54:23.053205 1223176 out.go:252] * Restarting existing docker container for "old-k8s-version-283312" ...
	I1123 08:54:23.053282 1223176 cli_runner.go:164] Run: docker start old-k8s-version-283312
	I1123 08:54:23.307351 1223176 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Status}}
	I1123 08:54:23.328459 1223176 kic.go:430] container "old-k8s-version-283312" state is running.
	I1123 08:54:23.328876 1223176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-283312
	I1123 08:54:23.371658 1223176 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/config.json ...
	I1123 08:54:23.371891 1223176 machine.go:94] provisionDockerMachine start ...
	I1123 08:54:23.371952 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:23.393881 1223176 main.go:143] libmachine: Using SSH client type: native
	I1123 08:54:23.395582 1223176 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34517 <nil> <nil>}
	I1123 08:54:23.395600 1223176 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:54:23.396603 1223176 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:54:26.546853 1223176 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-283312
	
	I1123 08:54:26.546875 1223176 ubuntu.go:182] provisioning hostname "old-k8s-version-283312"
	I1123 08:54:26.546936 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:26.564565 1223176 main.go:143] libmachine: Using SSH client type: native
	I1123 08:54:26.564873 1223176 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34517 <nil> <nil>}
	I1123 08:54:26.564890 1223176 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-283312 && echo "old-k8s-version-283312" | sudo tee /etc/hostname
	I1123 08:54:26.728847 1223176 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-283312
	
	I1123 08:54:26.728935 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:26.747026 1223176 main.go:143] libmachine: Using SSH client type: native
	I1123 08:54:26.747367 1223176 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34517 <nil> <nil>}
	I1123 08:54:26.747390 1223176 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-283312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-283312/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-283312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:54:26.895355 1223176 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:54:26.895381 1223176 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 08:54:26.895411 1223176 ubuntu.go:190] setting up certificates
	I1123 08:54:26.895421 1223176 provision.go:84] configureAuth start
	I1123 08:54:26.895493 1223176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-283312
	I1123 08:54:26.911996 1223176 provision.go:143] copyHostCerts
	I1123 08:54:26.912074 1223176 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem, removing ...
	I1123 08:54:26.912091 1223176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem
	I1123 08:54:26.912167 1223176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 08:54:26.912262 1223176 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem, removing ...
	I1123 08:54:26.912270 1223176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem
	I1123 08:54:26.912301 1223176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 08:54:26.912355 1223176 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem, removing ...
	I1123 08:54:26.912363 1223176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem
	I1123 08:54:26.912385 1223176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
	I1123 08:54:26.912441 1223176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-283312 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-283312]
	I1123 08:54:27.185383 1223176 provision.go:177] copyRemoteCerts
	I1123 08:54:27.185449 1223176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:54:27.185495 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:27.203168 1223176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34517 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:54:27.307149 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:54:27.325216 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1123 08:54:27.343357 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:54:27.360855 1223176 provision.go:87] duration metric: took 465.408908ms to configureAuth
	I1123 08:54:27.360885 1223176 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:54:27.361074 1223176 config.go:182] Loaded profile config "old-k8s-version-283312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:54:27.361199 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:27.382609 1223176 main.go:143] libmachine: Using SSH client type: native
	I1123 08:54:27.383052 1223176 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34517 <nil> <nil>}
	I1123 08:54:27.383083 1223176 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:54:27.740021 1223176 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:54:27.740048 1223176 machine.go:97] duration metric: took 4.368146679s to provisionDockerMachine
	I1123 08:54:27.740059 1223176 start.go:293] postStartSetup for "old-k8s-version-283312" (driver="docker")
	I1123 08:54:27.740070 1223176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:54:27.740133 1223176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:54:27.740176 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:27.759020 1223176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34517 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:54:27.862686 1223176 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:54:27.865746 1223176 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:54:27.865774 1223176 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:54:27.865804 1223176 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/addons for local assets ...
	I1123 08:54:27.865902 1223176 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/files for local assets ...
	I1123 08:54:27.865985 1223176 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem -> 10431592.pem in /etc/ssl/certs
	I1123 08:54:27.866091 1223176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:54:27.873201 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:54:27.890078 1223176 start.go:296] duration metric: took 150.004726ms for postStartSetup
	I1123 08:54:27.890204 1223176 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:54:27.890267 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:27.906955 1223176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34517 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:54:28.010415 1223176 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:54:28.015509 1223176 fix.go:56] duration metric: took 4.983810969s for fixHost
	I1123 08:54:28.015538 1223176 start.go:83] releasing machines lock for "old-k8s-version-283312", held for 4.983865589s
	I1123 08:54:28.015615 1223176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-283312
	I1123 08:54:28.033105 1223176 ssh_runner.go:195] Run: cat /version.json
	I1123 08:54:28.033127 1223176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:54:28.033161 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:28.033213 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:28.057814 1223176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34517 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:54:28.068870 1223176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34517 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:54:28.166665 1223176 ssh_runner.go:195] Run: systemctl --version
	I1123 08:54:28.255249 1223176 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:54:28.289516 1223176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:54:28.293707 1223176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:54:28.293819 1223176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:54:28.301119 1223176 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:54:28.301153 1223176 start.go:496] detecting cgroup driver to use...
	I1123 08:54:28.301189 1223176 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:54:28.301247 1223176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:54:28.315563 1223176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:54:28.328236 1223176 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:54:28.328310 1223176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:54:28.343291 1223176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:54:28.356400 1223176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:54:28.477730 1223176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:54:28.601634 1223176 docker.go:234] disabling docker service ...
	I1123 08:54:28.601747 1223176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:54:28.616306 1223176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:54:28.628956 1223176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:54:28.752975 1223176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:54:28.869799 1223176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:54:28.883165 1223176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:54:28.901258 1223176 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1123 08:54:28.901339 1223176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:54:28.909857 1223176 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 08:54:28.909960 1223176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:54:28.919646 1223176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:54:28.927760 1223176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:54:28.943230 1223176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:54:28.951387 1223176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:54:28.960728 1223176 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:54:28.968468 1223176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:54:28.976789 1223176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:54:28.983810 1223176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:54:28.991072 1223176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:54:29.107863 1223176 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 08:54:29.286136 1223176 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:54:29.286252 1223176 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:54:29.289888 1223176 start.go:564] Will wait 60s for crictl version
	I1123 08:54:29.289948 1223176 ssh_runner.go:195] Run: which crictl
	I1123 08:54:29.293289 1223176 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:54:29.321076 1223176 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:54:29.321185 1223176 ssh_runner.go:195] Run: crio --version
	I1123 08:54:29.353156 1223176 ssh_runner.go:195] Run: crio --version
	I1123 08:54:29.391298 1223176 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1123 08:54:29.394201 1223176 cli_runner.go:164] Run: docker network inspect old-k8s-version-283312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:54:29.410331 1223176 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:54:29.414049 1223176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:54:29.423335 1223176 kubeadm.go:884] updating cluster {Name:old-k8s-version-283312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-283312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:54:29.423696 1223176 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:54:29.423808 1223176 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:54:29.463176 1223176 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:54:29.463238 1223176 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:54:29.463321 1223176 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:54:29.489255 1223176 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:54:29.489280 1223176 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:54:29.489289 1223176 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1123 08:54:29.489392 1223176 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-283312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-283312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:54:29.489471 1223176 ssh_runner.go:195] Run: crio config
	I1123 08:54:29.542061 1223176 cni.go:84] Creating CNI manager for ""
	I1123 08:54:29.542086 1223176 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:54:29.542131 1223176 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:54:29.542160 1223176 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-283312 NodeName:old-k8s-version-283312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:54:29.542308 1223176 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-283312"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:54:29.542385 1223176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1123 08:54:29.550434 1223176 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:54:29.550531 1223176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:54:29.558085 1223176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1123 08:54:29.570629 1223176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:54:29.582971 1223176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1123 08:54:29.595961 1223176 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:54:29.599711 1223176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:54:29.609222 1223176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:54:29.728352 1223176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:54:29.749675 1223176 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312 for IP: 192.168.85.2
	I1123 08:54:29.749693 1223176 certs.go:195] generating shared ca certs ...
	I1123 08:54:29.749708 1223176 certs.go:227] acquiring lock for ca certs: {Name:mk8b2dd1177c57b74f955f055073d275001ee616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:54:29.749842 1223176 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key
	I1123 08:54:29.749892 1223176 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key
	I1123 08:54:29.749899 1223176 certs.go:257] generating profile certs ...
	I1123 08:54:29.749983 1223176 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.key
	I1123 08:54:29.750047 1223176 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/apiserver.key.0b5b326f
	I1123 08:54:29.750088 1223176 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/proxy-client.key
	I1123 08:54:29.750216 1223176 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem (1338 bytes)
	W1123 08:54:29.750247 1223176 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159_empty.pem, impossibly tiny 0 bytes
	I1123 08:54:29.750255 1223176 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:54:29.750291 1223176 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:54:29.750316 1223176 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:54:29.750341 1223176 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem (1675 bytes)
	I1123 08:54:29.750386 1223176 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:54:29.751012 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:54:29.768492 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:54:29.785757 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:54:29.804030 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:54:29.821360 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1123 08:54:29.839144 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:54:29.858550 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:54:29.875751 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:54:29.895232 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:54:29.913770 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem --> /usr/share/ca-certificates/1043159.pem (1338 bytes)
	I1123 08:54:29.935388 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /usr/share/ca-certificates/10431592.pem (1708 bytes)
	I1123 08:54:29.961654 1223176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:54:29.977561 1223176 ssh_runner.go:195] Run: openssl version
	I1123 08:54:29.985974 1223176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:54:29.996765 1223176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:54:30.001227 1223176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:54:30.001379 1223176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:54:30.048763 1223176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:54:30.057740 1223176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1043159.pem && ln -fs /usr/share/ca-certificates/1043159.pem /etc/ssl/certs/1043159.pem"
	I1123 08:54:30.067344 1223176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1043159.pem
	I1123 08:54:30.072244 1223176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:03 /usr/share/ca-certificates/1043159.pem
	I1123 08:54:30.072369 1223176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1043159.pem
	I1123 08:54:30.118796 1223176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1043159.pem /etc/ssl/certs/51391683.0"
	I1123 08:54:30.128096 1223176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10431592.pem && ln -fs /usr/share/ca-certificates/10431592.pem /etc/ssl/certs/10431592.pem"
	I1123 08:54:30.136878 1223176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10431592.pem
	I1123 08:54:30.141985 1223176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:03 /usr/share/ca-certificates/10431592.pem
	I1123 08:54:30.142074 1223176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10431592.pem
	I1123 08:54:30.184898 1223176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10431592.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:54:30.193813 1223176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:54:30.198010 1223176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 08:54:30.239938 1223176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 08:54:30.280953 1223176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 08:54:30.321826 1223176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 08:54:30.364720 1223176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 08:54:30.407107 1223176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1123 08:54:30.451110 1223176 kubeadm.go:401] StartCluster: {Name:old-k8s-version-283312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-283312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:54:30.451215 1223176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:54:30.451287 1223176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:54:30.514126 1223176 cri.go:89] found id: ""
	I1123 08:54:30.514219 1223176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:54:30.526195 1223176 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 08:54:30.526222 1223176 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 08:54:30.526279 1223176 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 08:54:30.540385 1223176 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:54:30.542060 1223176 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-283312" does not appear in /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:54:30.542352 1223176 kubeconfig.go:62] /home/jenkins/minikube-integration/21966-1041293/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-283312" cluster setting kubeconfig missing "old-k8s-version-283312" context setting]
	I1123 08:54:30.542869 1223176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:54:30.544636 1223176 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 08:54:30.566780 1223176 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 08:54:30.566810 1223176 kubeadm.go:602] duration metric: took 40.581911ms to restartPrimaryControlPlane
	I1123 08:54:30.566821 1223176 kubeadm.go:403] duration metric: took 115.720774ms to StartCluster
	I1123 08:54:30.566858 1223176 settings.go:142] acquiring lock: {Name:mk23f3092f33e47ced9558cb4bac2b30c55547fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:54:30.566937 1223176 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:54:30.567951 1223176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:54:30.568203 1223176 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:54:30.568602 1223176 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:54:30.568672 1223176 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-283312"
	I1123 08:54:30.568687 1223176 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-283312"
	W1123 08:54:30.568697 1223176 addons.go:248] addon storage-provisioner should already be in state true
	I1123 08:54:30.568720 1223176 host.go:66] Checking if "old-k8s-version-283312" exists ...
	I1123 08:54:30.569230 1223176 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Status}}
	I1123 08:54:30.569457 1223176 config.go:182] Loaded profile config "old-k8s-version-283312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:54:30.569507 1223176 addons.go:70] Setting dashboard=true in profile "old-k8s-version-283312"
	I1123 08:54:30.569516 1223176 addons.go:239] Setting addon dashboard=true in "old-k8s-version-283312"
	W1123 08:54:30.569530 1223176 addons.go:248] addon dashboard should already be in state true
	I1123 08:54:30.569551 1223176 host.go:66] Checking if "old-k8s-version-283312" exists ...
	I1123 08:54:30.569998 1223176 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Status}}
	I1123 08:54:30.570730 1223176 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-283312"
	I1123 08:54:30.570751 1223176 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-283312"
	I1123 08:54:30.571064 1223176 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Status}}
	I1123 08:54:30.576246 1223176 out.go:179] * Verifying Kubernetes components...
	I1123 08:54:30.583287 1223176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:54:30.628030 1223176 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:54:30.628180 1223176 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 08:54:30.630430 1223176 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-283312"
	W1123 08:54:30.630549 1223176 addons.go:248] addon default-storageclass should already be in state true
	I1123 08:54:30.630577 1223176 host.go:66] Checking if "old-k8s-version-283312" exists ...
	I1123 08:54:30.631004 1223176 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Status}}
	I1123 08:54:30.631490 1223176 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:54:30.631506 1223176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:54:30.631549 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:30.638001 1223176 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 08:54:30.641287 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 08:54:30.641315 1223176 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 08:54:30.641380 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:30.691505 1223176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34517 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:54:30.705604 1223176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34517 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:54:30.705623 1223176 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:54:30.705694 1223176 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:54:30.705794 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:30.732906 1223176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34517 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:54:30.900993 1223176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:54:30.984244 1223176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:54:31.017381 1223176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:54:31.087759 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 08:54:31.087793 1223176 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 08:54:31.121821 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 08:54:31.121854 1223176 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 08:54:31.211595 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 08:54:31.211625 1223176 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 08:54:31.343338 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 08:54:31.343372 1223176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 08:54:31.399603 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 08:54:31.399634 1223176 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 08:54:31.422465 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 08:54:31.422504 1223176 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 08:54:31.446212 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 08:54:31.446247 1223176 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 08:54:31.465514 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 08:54:31.465546 1223176 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 08:54:31.495270 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:54:31.495300 1223176 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 08:54:31.520159 1223176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:54:36.650595 1223176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.749569552s)
	I1123 08:54:36.650920 1223176 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.66664842s)
	I1123 08:54:36.650951 1223176 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-283312" to be "Ready" ...
	I1123 08:54:36.700625 1223176 node_ready.go:49] node "old-k8s-version-283312" is "Ready"
	I1123 08:54:36.700653 1223176 node_ready.go:38] duration metric: took 49.689804ms for node "old-k8s-version-283312" to be "Ready" ...
	I1123 08:54:36.700667 1223176 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:54:36.700722 1223176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:54:37.274872 1223176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.257454266s)
	I1123 08:54:37.744799 1223176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.224596598s)
	I1123 08:54:37.744908 1223176 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.044164421s)
	I1123 08:54:37.745080 1223176 api_server.go:72] duration metric: took 7.176839071s to wait for apiserver process to appear ...
	I1123 08:54:37.745096 1223176 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:54:37.745114 1223176 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 08:54:37.748123 1223176 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-283312 addons enable metrics-server
	
	I1123 08:54:37.751057 1223176 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1123 08:54:37.754022 1223176 addons.go:530] duration metric: took 7.185420554s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1123 08:54:37.754637 1223176 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 08:54:37.756026 1223176 api_server.go:141] control plane version: v1.28.0
	I1123 08:54:37.756050 1223176 api_server.go:131] duration metric: took 10.947425ms to wait for apiserver health ...
	I1123 08:54:37.756059 1223176 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:54:37.759379 1223176 system_pods.go:59] 8 kube-system pods found
	I1123 08:54:37.759423 1223176 system_pods.go:61] "coredns-5dd5756b68-mpf62" [29956376-ee4e-402e-98dc-864a4ff169d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:54:37.759433 1223176 system_pods.go:61] "etcd-old-k8s-version-283312" [171ec724-181b-4c1c-814b-7b3eb801b010] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:54:37.759439 1223176 system_pods.go:61] "kindnet-fnbgj" [ff60f979-e04b-41da-8682-971a31d72da3] Running
	I1123 08:54:37.759447 1223176 system_pods.go:61] "kube-apiserver-old-k8s-version-283312" [68187f7b-ab9d-4cda-97c7-0559bc9c6b8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:54:37.759453 1223176 system_pods.go:61] "kube-controller-manager-old-k8s-version-283312" [6824fd9a-3bcc-4856-b840-5f6c6866e870] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:54:37.759477 1223176 system_pods.go:61] "kube-proxy-5w4q4" [886c8da3-dfce-4d49-b73c-6799d52d1028] Running
	I1123 08:54:37.759483 1223176 system_pods.go:61] "kube-scheduler-old-k8s-version-283312" [e1a93883-3c97-4c10-abcc-8917c5752ebf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:54:37.759487 1223176 system_pods.go:61] "storage-provisioner" [f8356741-0113-4d0f-b602-081220c219b4] Running
	I1123 08:54:37.759494 1223176 system_pods.go:74] duration metric: took 3.426942ms to wait for pod list to return data ...
	I1123 08:54:37.759504 1223176 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:54:37.761901 1223176 default_sa.go:45] found service account: "default"
	I1123 08:54:37.761925 1223176 default_sa.go:55] duration metric: took 2.414613ms for default service account to be created ...
	I1123 08:54:37.761935 1223176 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:54:37.765620 1223176 system_pods.go:86] 8 kube-system pods found
	I1123 08:54:37.765652 1223176 system_pods.go:89] "coredns-5dd5756b68-mpf62" [29956376-ee4e-402e-98dc-864a4ff169d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:54:37.765662 1223176 system_pods.go:89] "etcd-old-k8s-version-283312" [171ec724-181b-4c1c-814b-7b3eb801b010] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:54:37.765667 1223176 system_pods.go:89] "kindnet-fnbgj" [ff60f979-e04b-41da-8682-971a31d72da3] Running
	I1123 08:54:37.765675 1223176 system_pods.go:89] "kube-apiserver-old-k8s-version-283312" [68187f7b-ab9d-4cda-97c7-0559bc9c6b8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:54:37.765687 1223176 system_pods.go:89] "kube-controller-manager-old-k8s-version-283312" [6824fd9a-3bcc-4856-b840-5f6c6866e870] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:54:37.765694 1223176 system_pods.go:89] "kube-proxy-5w4q4" [886c8da3-dfce-4d49-b73c-6799d52d1028] Running
	I1123 08:54:37.765706 1223176 system_pods.go:89] "kube-scheduler-old-k8s-version-283312" [e1a93883-3c97-4c10-abcc-8917c5752ebf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:54:37.765716 1223176 system_pods.go:89] "storage-provisioner" [f8356741-0113-4d0f-b602-081220c219b4] Running
	I1123 08:54:37.765723 1223176 system_pods.go:126] duration metric: took 3.782362ms to wait for k8s-apps to be running ...
	I1123 08:54:37.765731 1223176 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:54:37.765803 1223176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:54:37.779379 1223176 system_svc.go:56] duration metric: took 13.639618ms WaitForService to wait for kubelet
	I1123 08:54:37.779408 1223176 kubeadm.go:587] duration metric: took 7.211175007s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:54:37.779426 1223176 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:54:37.782442 1223176 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:54:37.782471 1223176 node_conditions.go:123] node cpu capacity is 2
	I1123 08:54:37.782483 1223176 node_conditions.go:105] duration metric: took 3.052142ms to run NodePressure ...
	I1123 08:54:37.782496 1223176 start.go:242] waiting for startup goroutines ...
	I1123 08:54:37.782503 1223176 start.go:247] waiting for cluster config update ...
	I1123 08:54:37.782514 1223176 start.go:256] writing updated cluster config ...
	I1123 08:54:37.782805 1223176 ssh_runner.go:195] Run: rm -f paused
	I1123 08:54:37.786726 1223176 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:54:37.791110 1223176 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-mpf62" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:54:39.796606 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:54:41.796754 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:54:43.797140 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:54:46.296390 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:54:48.299069 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:54:50.797866 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:54:52.799034 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:54:55.296738 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:54:57.298203 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:54:59.797538 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:55:02.297252 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:55:04.796551 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:55:06.797317 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	I1123 08:55:07.799004 1223176 pod_ready.go:94] pod "coredns-5dd5756b68-mpf62" is "Ready"
	I1123 08:55:07.799032 1223176 pod_ready.go:86] duration metric: took 30.007894855s for pod "coredns-5dd5756b68-mpf62" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:07.802736 1223176 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:07.810647 1223176 pod_ready.go:94] pod "etcd-old-k8s-version-283312" is "Ready"
	I1123 08:55:07.810673 1223176 pod_ready.go:86] duration metric: took 7.909173ms for pod "etcd-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:07.813562 1223176 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:07.818046 1223176 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-283312" is "Ready"
	I1123 08:55:07.818113 1223176 pod_ready.go:86] duration metric: took 4.487573ms for pod "kube-apiserver-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:07.821115 1223176 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:07.994191 1223176 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-283312" is "Ready"
	I1123 08:55:07.994220 1223176 pod_ready.go:86] duration metric: took 173.080536ms for pod "kube-controller-manager-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:08.195966 1223176 pod_ready.go:83] waiting for pod "kube-proxy-5w4q4" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:08.594466 1223176 pod_ready.go:94] pod "kube-proxy-5w4q4" is "Ready"
	I1123 08:55:08.594497 1223176 pod_ready.go:86] duration metric: took 398.506124ms for pod "kube-proxy-5w4q4" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:08.795256 1223176 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:09.194206 1223176 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-283312" is "Ready"
	I1123 08:55:09.194236 1223176 pod_ready.go:86] duration metric: took 398.95114ms for pod "kube-scheduler-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:09.194251 1223176 pod_ready.go:40] duration metric: took 31.407490817s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:55:09.254613 1223176 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1123 08:55:09.257634 1223176 out.go:203] 
	W1123 08:55:09.260656 1223176 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 08:55:09.263530 1223176 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 08:55:09.266346 1223176 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-283312" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 08:55:13 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:13.982301037Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:55:13 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:13.989268221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:55:13 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:13.990302211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:55:14 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:14.009586004Z" level=info msg="Created container 8c99e240d0d3c5b09f26cce84e285d1b5311e5d85caeceee56e98e8d83ab6deb: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw/dashboard-metrics-scraper" id=2b128219-d82a-4b10-b2bc-7994320985ca name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:55:14 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:14.010710773Z" level=info msg="Starting container: 8c99e240d0d3c5b09f26cce84e285d1b5311e5d85caeceee56e98e8d83ab6deb" id=ca82b164-38c9-4acb-a337-33a83d77ed6f name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:55:14 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:14.012457482Z" level=info msg="Started container" PID=1651 containerID=8c99e240d0d3c5b09f26cce84e285d1b5311e5d85caeceee56e98e8d83ab6deb description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw/dashboard-metrics-scraper id=ca82b164-38c9-4acb-a337-33a83d77ed6f name=/runtime.v1.RuntimeService/StartContainer sandboxID=0d5a8b25de23159a9133fab4506e58f308fc615ba30ae8e4d87bc4e947e0ef3b
	Nov 23 08:55:14 old-k8s-version-283312 conmon[1649]: conmon 8c99e240d0d3c5b09f26 <ninfo>: container 1651 exited with status 1
	Nov 23 08:55:14 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:14.220972558Z" level=info msg="Removing container: f520624d2f51e038e6961af4664bf4abaea2a25044a1187d43ffab83b630bfa3" id=f8f848b6-4c8d-4ad6-b045-dcd6921b50d0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:55:14 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:14.228542064Z" level=info msg="Error loading conmon cgroup of container f520624d2f51e038e6961af4664bf4abaea2a25044a1187d43ffab83b630bfa3: cgroup deleted" id=f8f848b6-4c8d-4ad6-b045-dcd6921b50d0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:55:14 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:14.231522749Z" level=info msg="Removed container f520624d2f51e038e6961af4664bf4abaea2a25044a1187d43ffab83b630bfa3: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw/dashboard-metrics-scraper" id=f8f848b6-4c8d-4ad6-b045-dcd6921b50d0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.757395992Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.763225427Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.763259215Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.763283879Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.767087392Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.767120532Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.767295666Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.770308104Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.770338954Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.7703589Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.774001417Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.774041227Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.774065596Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.777029001Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.777073308Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	8c99e240d0d3c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   2                   0d5a8b25de231       dashboard-metrics-scraper-5f989dc9cf-s72zw       kubernetes-dashboard
	55f4a448b0d3e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           18 seconds ago      Running             storage-provisioner         2                   6b239fcfc2bd2       storage-provisioner                              kube-system
	62de83d6e4fd1       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   27 seconds ago      Running             kubernetes-dashboard        0                   06ec79548f5d7       kubernetes-dashboard-8694d4445c-6t89s            kubernetes-dashboard
	fcf4f481baec7       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           48 seconds ago      Running             coredns                     1                   18a7d1c50d93c       coredns-5dd5756b68-mpf62                         kube-system
	b31f17b2d91d4       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           49 seconds ago      Running             kube-proxy                  1                   262f20c5ca042       kube-proxy-5w4q4                                 kube-system
	63c05087bc349       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago      Exited              storage-provisioner         1                   6b239fcfc2bd2       storage-provisioner                              kube-system
	6adc196492ef0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   b8380778355eb       busybox                                          default
	ce8860867859e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   53d1d20b4828b       kindnet-fnbgj                                    kube-system
	ca452ae3435ab       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           54 seconds ago      Running             kube-scheduler              1                   6a304d21a140f       kube-scheduler-old-k8s-version-283312            kube-system
	d7311c2c5699a       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           54 seconds ago      Running             kube-apiserver              1                   4e058b9a054d0       kube-apiserver-old-k8s-version-283312            kube-system
	7b9b4a6e426f2       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           54 seconds ago      Running             kube-controller-manager     1                   7f7ce9cd9190a       kube-controller-manager-old-k8s-version-283312   kube-system
	247b7aa0c1261       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           54 seconds ago      Running             etcd                        1                   7985141bf10aa       etcd-old-k8s-version-283312                      kube-system
	
	
	==> coredns [fcf4f481baec79c0761b307b5212215829faaa625e6e489cf694d8fb1d2d4062] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42031 - 9329 "HINFO IN 5664389183855982145.3112708891388839210. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004477112s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-283312
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-283312
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=old-k8s-version-283312
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_53_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:53:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-283312
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:55:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:55:05 +0000   Sun, 23 Nov 2025 08:53:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:55:05 +0000   Sun, 23 Nov 2025 08:53:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:55:05 +0000   Sun, 23 Nov 2025 08:53:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:55:05 +0000   Sun, 23 Nov 2025 08:53:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-283312
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                392eb6cc-4f42-4cea-8c55-b6ca8bbf6612
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-5dd5756b68-mpf62                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     104s
	  kube-system                 etcd-old-k8s-version-283312                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-fnbgj                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-283312             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-283312    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-5w4q4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-283312             100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-s72zw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-6t89s             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 48s                kube-proxy       
	  Normal  NodeHasSufficientMemory  117s               kubelet          Node old-k8s-version-283312 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s               kubelet          Node old-k8s-version-283312 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s               kubelet          Node old-k8s-version-283312 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s               node-controller  Node old-k8s-version-283312 event: Registered Node old-k8s-version-283312 in Controller
	  Normal  NodeReady                89s                kubelet          Node old-k8s-version-283312 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 56s)  kubelet          Node old-k8s-version-283312 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 56s)  kubelet          Node old-k8s-version-283312 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 56s)  kubelet          Node old-k8s-version-283312 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           37s                node-controller  Node old-k8s-version-283312 event: Registered Node old-k8s-version-283312 in Controller
	
	
	==> dmesg <==
	[Nov23 08:28] overlayfs: idmapped layers are currently not supported
	[Nov23 08:32] overlayfs: idmapped layers are currently not supported
	[Nov23 08:33] overlayfs: idmapped layers are currently not supported
	[Nov23 08:34] overlayfs: idmapped layers are currently not supported
	[Nov23 08:35] overlayfs: idmapped layers are currently not supported
	[Nov23 08:36] overlayfs: idmapped layers are currently not supported
	[Nov23 08:37] overlayfs: idmapped layers are currently not supported
	[Nov23 08:38] overlayfs: idmapped layers are currently not supported
	[  +8.276067] overlayfs: idmapped layers are currently not supported
	[Nov23 08:39] overlayfs: idmapped layers are currently not supported
	[ +25.090966] overlayfs: idmapped layers are currently not supported
	[Nov23 08:40] overlayfs: idmapped layers are currently not supported
	[ +26.896711] overlayfs: idmapped layers are currently not supported
	[Nov23 08:41] overlayfs: idmapped layers are currently not supported
	[Nov23 08:43] overlayfs: idmapped layers are currently not supported
	[Nov23 08:45] overlayfs: idmapped layers are currently not supported
	[Nov23 08:46] overlayfs: idmapped layers are currently not supported
	[Nov23 08:47] overlayfs: idmapped layers are currently not supported
	[Nov23 08:49] overlayfs: idmapped layers are currently not supported
	[Nov23 08:51] overlayfs: idmapped layers are currently not supported
	[ +55.116920] overlayfs: idmapped layers are currently not supported
	[Nov23 08:52] overlayfs: idmapped layers are currently not supported
	[  +5.731396] overlayfs: idmapped layers are currently not supported
	[Nov23 08:53] overlayfs: idmapped layers are currently not supported
	[Nov23 08:54] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [247b7aa0c1261bc65c70f1271c4f8036028cf3420d07070ead4ca25228884653] <==
	{"level":"info","ts":"2025-11-23T08:54:31.060727Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T08:54:31.060761Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T08:54:31.061046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-23T08:54:31.061686Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-23T08:54:31.062066Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:54:31.063239Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:54:31.073531Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T08:54:31.07992Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T08:54:31.083213Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T08:54:31.083596Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T08:54:31.083692Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T08:54:32.907251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-23T08:54:32.907373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-23T08:54:32.907424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-23T08:54:32.907462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-23T08:54:32.907492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-23T08:54:32.90753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-23T08:54:32.90756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-23T08:54:32.911374Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-283312 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T08:54:32.911465Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:54:32.912477Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T08:54:32.923212Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:54:32.928758Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-23T08:54:32.931806Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T08:54:32.931886Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 08:55:25 up  9:37,  0 user,  load average: 1.51, 2.84, 2.50
	Linux old-k8s-version-283312 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ce8860867859e5b27abf00bdcc1cc203fb3241543231bf6a3915cb8500c83601] <==
	I1123 08:54:36.540408       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:54:36.540760       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:54:36.540887       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:54:36.540899       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:54:36.540912       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:54:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:54:36.760773       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:54:36.760799       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:54:36.760809       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:54:36.760922       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:55:06.758536       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 08:55:06.759733       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 08:55:06.760811       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 08:55:06.760864       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 08:55:07.961207       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:55:07.961306       1 metrics.go:72] Registering metrics
	I1123 08:55:07.961383       1 controller.go:711] "Syncing nftables rules"
	I1123 08:55:16.757077       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:55:16.757121       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d7311c2c5699ad0d41a6408dfece98289565e80a60184519834f707726b47a53] <==
	I1123 08:54:35.547243       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1123 08:54:35.572811       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1123 08:54:35.572894       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1123 08:54:35.582556       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 08:54:35.582781       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1123 08:54:35.589054       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 08:54:35.589100       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 08:54:35.597298       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1123 08:54:35.605164       1 aggregator.go:166] initial CRD sync complete...
	I1123 08:54:35.605190       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 08:54:35.605197       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:54:35.605205       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:54:35.607312       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E1123 08:54:35.669869       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 08:54:36.208321       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:54:37.557037       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 08:54:37.602427       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 08:54:37.629710       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:54:37.645275       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:54:37.658502       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 08:54:37.715117       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.21.131"}
	I1123 08:54:37.737768       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.163.237"}
	I1123 08:54:48.609001       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 08:54:48.869941       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:54:48.908189       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [7b9b4a6e426f263cb39534651d3c1f27d4fc7d585032ccbec072ae66318023df] <==
	I1123 08:54:48.767412       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-s72zw"
	I1123 08:54:48.773096       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="286.744951ms"
	I1123 08:54:48.773213       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-6t89s"
	I1123 08:54:48.773322       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="92.059µs"
	I1123 08:54:48.794477       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="180.004745ms"
	I1123 08:54:48.796992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="180.499559ms"
	I1123 08:54:48.841461       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.832363ms"
	I1123 08:54:48.851222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.073067ms"
	I1123 08:54:48.851370       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="42.296µs"
	I1123 08:54:48.867236       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="25.613229ms"
	I1123 08:54:48.867437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="78.464µs"
	I1123 08:54:48.877285       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1123 08:54:48.924129       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1123 08:54:49.000169       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:54:49.009341       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:54:49.009375       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 08:54:54.172449       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.701µs"
	I1123 08:54:55.178970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.517µs"
	I1123 08:54:56.179134       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.001µs"
	I1123 08:54:58.219669       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="25.032714ms"
	I1123 08:54:58.219882       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="71.949µs"
	I1123 08:55:07.720108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.045985ms"
	I1123 08:55:07.720551       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.641µs"
	I1123 08:55:14.240049       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.922µs"
	I1123 08:55:19.096480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.969µs"
	
	
	==> kube-proxy [b31f17b2d91d4748333acd703896175fb35e33ee0b0916ca17f1f3d164797f0b] <==
	I1123 08:54:36.855838       1 server_others.go:69] "Using iptables proxy"
	I1123 08:54:36.887721       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1123 08:54:36.940000       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:54:36.942304       1 server_others.go:152] "Using iptables Proxier"
	I1123 08:54:36.942360       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 08:54:36.942369       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 08:54:36.942407       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 08:54:36.942627       1 server.go:846] "Version info" version="v1.28.0"
	I1123 08:54:36.942638       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:54:36.949620       1 config.go:188] "Starting service config controller"
	I1123 08:54:36.949640       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 08:54:36.949659       1 config.go:97] "Starting endpoint slice config controller"
	I1123 08:54:36.949663       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 08:54:36.950071       1 config.go:315] "Starting node config controller"
	I1123 08:54:36.950080       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 08:54:37.055274       1 shared_informer.go:318] Caches are synced for node config
	I1123 08:54:37.055311       1 shared_informer.go:318] Caches are synced for service config
	I1123 08:54:37.055338       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ca452ae3435abe579950d1a807b7521d73e868fac14b3362725c339938db9ba9] <==
	I1123 08:54:34.062497       1 serving.go:348] Generated self-signed cert in-memory
	I1123 08:54:35.742322       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1123 08:54:35.743252       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:54:35.756807       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1123 08:54:35.757002       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1123 08:54:35.757052       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1123 08:54:35.757114       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1123 08:54:35.764717       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:54:35.765429       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1123 08:54:35.764985       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:54:35.765562       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1123 08:54:35.857522       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1123 08:54:35.866462       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1123 08:54:35.866470       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 08:54:48 old-k8s-version-283312 kubelet[779]: I1123 08:54:48.867729     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4vfk\" (UniqueName: \"kubernetes.io/projected/3c611bc7-9e3b-4e30-9e9f-8708b366992b-kube-api-access-v4vfk\") pod \"dashboard-metrics-scraper-5f989dc9cf-s72zw\" (UID: \"3c611bc7-9e3b-4e30-9e9f-8708b366992b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw"
	Nov 23 08:54:48 old-k8s-version-283312 kubelet[779]: I1123 08:54:48.867815     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tlpq\" (UniqueName: \"kubernetes.io/projected/0813d124-6f61-456a-9a7d-79a6b4d2e1a3-kube-api-access-8tlpq\") pod \"kubernetes-dashboard-8694d4445c-6t89s\" (UID: \"0813d124-6f61-456a-9a7d-79a6b4d2e1a3\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-6t89s"
	Nov 23 08:54:48 old-k8s-version-283312 kubelet[779]: I1123 08:54:48.867986     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0813d124-6f61-456a-9a7d-79a6b4d2e1a3-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-6t89s\" (UID: \"0813d124-6f61-456a-9a7d-79a6b4d2e1a3\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-6t89s"
	Nov 23 08:54:48 old-k8s-version-283312 kubelet[779]: I1123 08:54:48.868067     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3c611bc7-9e3b-4e30-9e9f-8708b366992b-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-s72zw\" (UID: \"3c611bc7-9e3b-4e30-9e9f-8708b366992b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw"
	Nov 23 08:54:49 old-k8s-version-283312 kubelet[779]: W1123 08:54:49.128106     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3/crio-06ec79548f5d7b35e91487a68ca973c6808750d2601cde2f0a3e88b74560ecb8 WatchSource:0}: Error finding container 06ec79548f5d7b35e91487a68ca973c6808750d2601cde2f0a3e88b74560ecb8: Status 404 returned error can't find the container with id 06ec79548f5d7b35e91487a68ca973c6808750d2601cde2f0a3e88b74560ecb8
	Nov 23 08:54:54 old-k8s-version-283312 kubelet[779]: I1123 08:54:54.156540     779 scope.go:117] "RemoveContainer" containerID="e57945833a9ee6b5980a83bf2e20c3b61f1920ba70b86fb54ab892deb2203a61"
	Nov 23 08:54:55 old-k8s-version-283312 kubelet[779]: I1123 08:54:55.161288     779 scope.go:117] "RemoveContainer" containerID="e57945833a9ee6b5980a83bf2e20c3b61f1920ba70b86fb54ab892deb2203a61"
	Nov 23 08:54:55 old-k8s-version-283312 kubelet[779]: I1123 08:54:55.161674     779 scope.go:117] "RemoveContainer" containerID="f520624d2f51e038e6961af4664bf4abaea2a25044a1187d43ffab83b630bfa3"
	Nov 23 08:54:55 old-k8s-version-283312 kubelet[779]: E1123 08:54:55.161990     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s72zw_kubernetes-dashboard(3c611bc7-9e3b-4e30-9e9f-8708b366992b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw" podUID="3c611bc7-9e3b-4e30-9e9f-8708b366992b"
	Nov 23 08:54:56 old-k8s-version-283312 kubelet[779]: I1123 08:54:56.164506     779 scope.go:117] "RemoveContainer" containerID="f520624d2f51e038e6961af4664bf4abaea2a25044a1187d43ffab83b630bfa3"
	Nov 23 08:54:56 old-k8s-version-283312 kubelet[779]: E1123 08:54:56.164913     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s72zw_kubernetes-dashboard(3c611bc7-9e3b-4e30-9e9f-8708b366992b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw" podUID="3c611bc7-9e3b-4e30-9e9f-8708b366992b"
	Nov 23 08:54:59 old-k8s-version-283312 kubelet[779]: I1123 08:54:59.083220     779 scope.go:117] "RemoveContainer" containerID="f520624d2f51e038e6961af4664bf4abaea2a25044a1187d43ffab83b630bfa3"
	Nov 23 08:54:59 old-k8s-version-283312 kubelet[779]: E1123 08:54:59.083557     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s72zw_kubernetes-dashboard(3c611bc7-9e3b-4e30-9e9f-8708b366992b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw" podUID="3c611bc7-9e3b-4e30-9e9f-8708b366992b"
	Nov 23 08:55:07 old-k8s-version-283312 kubelet[779]: I1123 08:55:07.200397     779 scope.go:117] "RemoveContainer" containerID="63c05087bc3492d07d07dc6676698eb369b01ebf027a57cb7753312ef9a68e38"
	Nov 23 08:55:07 old-k8s-version-283312 kubelet[779]: I1123 08:55:07.228237     779 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-6t89s" podStartSLOduration=10.326223408 podCreationTimestamp="2025-11-23 08:54:48 +0000 UTC" firstStartedPulling="2025-11-23 08:54:49.132437051 +0000 UTC m=+19.385014484" lastFinishedPulling="2025-11-23 08:54:58.034387219 +0000 UTC m=+28.286964652" observedRunningTime="2025-11-23 08:54:58.201519377 +0000 UTC m=+28.454096901" watchObservedRunningTime="2025-11-23 08:55:07.228173576 +0000 UTC m=+37.480751009"
	Nov 23 08:55:13 old-k8s-version-283312 kubelet[779]: I1123 08:55:13.978570     779 scope.go:117] "RemoveContainer" containerID="f520624d2f51e038e6961af4664bf4abaea2a25044a1187d43ffab83b630bfa3"
	Nov 23 08:55:14 old-k8s-version-283312 kubelet[779]: I1123 08:55:14.218553     779 scope.go:117] "RemoveContainer" containerID="f520624d2f51e038e6961af4664bf4abaea2a25044a1187d43ffab83b630bfa3"
	Nov 23 08:55:14 old-k8s-version-283312 kubelet[779]: I1123 08:55:14.219251     779 scope.go:117] "RemoveContainer" containerID="8c99e240d0d3c5b09f26cce84e285d1b5311e5d85caeceee56e98e8d83ab6deb"
	Nov 23 08:55:14 old-k8s-version-283312 kubelet[779]: E1123 08:55:14.219653     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s72zw_kubernetes-dashboard(3c611bc7-9e3b-4e30-9e9f-8708b366992b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw" podUID="3c611bc7-9e3b-4e30-9e9f-8708b366992b"
	Nov 23 08:55:19 old-k8s-version-283312 kubelet[779]: I1123 08:55:19.083099     779 scope.go:117] "RemoveContainer" containerID="8c99e240d0d3c5b09f26cce84e285d1b5311e5d85caeceee56e98e8d83ab6deb"
	Nov 23 08:55:19 old-k8s-version-283312 kubelet[779]: E1123 08:55:19.083451     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s72zw_kubernetes-dashboard(3c611bc7-9e3b-4e30-9e9f-8708b366992b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw" podUID="3c611bc7-9e3b-4e30-9e9f-8708b366992b"
	Nov 23 08:55:22 old-k8s-version-283312 kubelet[779]: I1123 08:55:22.489433     779 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 23 08:55:22 old-k8s-version-283312 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:55:22 old-k8s-version-283312 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:55:22 old-k8s-version-283312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [62de83d6e4fd10e27fd4b0e1f4adf0423f70c4e01537d1a2dd0b9dc5df5f955a] <==
	2025/11/23 08:54:58 Starting overwatch
	2025/11/23 08:54:58 Using namespace: kubernetes-dashboard
	2025/11/23 08:54:58 Using in-cluster config to connect to apiserver
	2025/11/23 08:54:58 Using secret token for csrf signing
	2025/11/23 08:54:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 08:54:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 08:54:58 Successful initial request to the apiserver, version: v1.28.0
	2025/11/23 08:54:58 Generating JWE encryption key
	2025/11/23 08:54:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 08:54:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 08:54:58 Initializing JWE encryption key from synchronized object
	2025/11/23 08:54:58 Creating in-cluster Sidecar client
	2025/11/23 08:54:58 Serving insecurely on HTTP port: 9090
	2025/11/23 08:54:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [55f4a448b0d3e5f8186ddea06d9e649d1ce7b5dc009cab7ea7b94a06ee6d2337] <==
	I1123 08:55:07.252735       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:55:07.266255       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:55:07.266302       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 08:55:24.662979       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:55:24.663132       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-283312_f61816f6-ed70-498e-beb0-cc6321f05a7b!
	I1123 08:55:24.664423       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69e256c0-2660-429a-be2b-9531ab7aed97", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-283312_f61816f6-ed70-498e-beb0-cc6321f05a7b became leader
	I1123 08:55:24.764175       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-283312_f61816f6-ed70-498e-beb0-cc6321f05a7b!
	
	
	==> storage-provisioner [63c05087bc3492d07d07dc6676698eb369b01ebf027a57cb7753312ef9a68e38] <==
	I1123 08:54:36.625033       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 08:55:06.626398       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-283312 -n old-k8s-version-283312
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-283312 -n old-k8s-version-283312: exit status 2 (364.999352ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-283312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-283312
helpers_test.go:243: (dbg) docker inspect old-k8s-version-283312:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3",
	        "Created": "2025-11-23T08:53:01.800677774Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1223304,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:54:23.085319934Z",
	            "FinishedAt": "2025-11-23T08:54:22.2600648Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3/hosts",
	        "LogPath": "/var/lib/docker/containers/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3-json.log",
	        "Name": "/old-k8s-version-283312",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-283312:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-283312",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3",
	                "LowerDir": "/var/lib/docker/overlay2/f7800adf0bb2faf578ed2bf4a26065d85d982030afc07cc96e1142a50ec29c06-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f7800adf0bb2faf578ed2bf4a26065d85d982030afc07cc96e1142a50ec29c06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f7800adf0bb2faf578ed2bf4a26065d85d982030afc07cc96e1142a50ec29c06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f7800adf0bb2faf578ed2bf4a26065d85d982030afc07cc96e1142a50ec29c06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-283312",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-283312/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-283312",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-283312",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-283312",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ad30ffa09d6b740ec29580ccfd495e589ebb4705fedcbf70b8a48ad53e9303a",
	            "SandboxKey": "/var/run/docker/netns/4ad30ffa09d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34517"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34518"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34521"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34519"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34520"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-283312": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:b4:1c:8d:3f:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1c3c4615dfb11778d84973791ecb3bc879152d7ae7a1ee624548096be909deb9",
	                    "EndpointID": "034984b3626f1a8a4657c8933326191e6428adc3b444e2f71f9d8d5e57be688a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-283312",
	                        "205e5ea134d1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-283312 -n old-k8s-version-283312
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-283312 -n old-k8s-version-283312: exit status 2 (350.387012ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
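The status template above only queries {{.Host}}, which still reports Running, so the non-zero exit presumably comes from another component left in an unexpected state by the failed pause. A sketch of how to narrow that down, with field names assumed from minikube's standard status output rather than taken from this run:

	# show host, kubelet, apiserver and kubeconfig state on one line
	out/minikube-linux-arm64 status -p old-k8s-version-283312 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'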
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-283312 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-283312 logs -n 25: (1.306175617s)
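The post-mortem below is limited by the -n 25 flag, so older entries from each log source are cut off. When reproducing locally, a fuller capture can be written to a file (flag assumed from the minikube CLI, not exercised in this run):

	# save the complete logs for later inspection
	out/minikube-linux-arm64 -p old-k8s-version-283312 logs --file=/tmp/old-k8s-version-283312.log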
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-082524 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo containerd config dump                                                                                                                                                                                                  │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo crio config                                                                                                                                                                                                             │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ delete  │ -p cilium-082524                                                                                                                                                                                                                              │ cilium-082524             │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │ 23 Nov 25 08:51 UTC │
	│ start   │ -p force-systemd-env-498438 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-498438  │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │ 23 Nov 25 08:52 UTC │
	│ delete  │ -p kubernetes-upgrade-354226                                                                                                                                                                                                                  │ kubernetes-upgrade-354226 │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ start   │ -p cert-expiration-322507 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-322507    │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ delete  │ -p force-systemd-env-498438                                                                                                                                                                                                                   │ force-systemd-env-498438  │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ start   │ -p cert-options-194318 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-194318       │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ ssh     │ cert-options-194318 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-194318       │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ ssh     │ -p cert-options-194318 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-194318       │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ delete  │ -p cert-options-194318                                                                                                                                                                                                                        │ cert-options-194318       │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ start   │ -p old-k8s-version-283312 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-283312    │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-283312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-283312    │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │                     │
	│ stop    │ -p old-k8s-version-283312 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-283312    │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-283312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-283312    │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:54 UTC │
	│ start   │ -p old-k8s-version-283312 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-283312    │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:55 UTC │
	│ image   │ old-k8s-version-283312 image list --format=json                                                                                                                                                                                               │ old-k8s-version-283312    │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ pause   │ -p old-k8s-version-283312 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-283312    │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:54:22
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:54:22.796041 1223176 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:54:22.796214 1223176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:54:22.796245 1223176 out.go:374] Setting ErrFile to fd 2...
	I1123 08:54:22.796264 1223176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:54:22.796519 1223176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:54:22.796903 1223176 out.go:368] Setting JSON to false
	I1123 08:54:22.797865 1223176 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34608,"bootTime":1763853455,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 08:54:22.798002 1223176 start.go:143] virtualization:  
	I1123 08:54:22.800941 1223176 out.go:179] * [old-k8s-version-283312] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:54:22.804783 1223176 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:54:22.804977 1223176 notify.go:221] Checking for updates...
	I1123 08:54:22.808671 1223176 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:54:22.811564 1223176 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:54:22.814512 1223176 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 08:54:22.817315 1223176 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:54:22.820231 1223176 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:54:22.823509 1223176 config.go:182] Loaded profile config "old-k8s-version-283312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:54:22.826992 1223176 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1123 08:54:22.829819 1223176 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:54:22.862566 1223176 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:54:22.862688 1223176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:54:22.919743 1223176 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:54:22.909990754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:54:22.919889 1223176 docker.go:319] overlay module found
	I1123 08:54:22.923061 1223176 out.go:179] * Using the docker driver based on existing profile
	I1123 08:54:22.925933 1223176 start.go:309] selected driver: docker
	I1123 08:54:22.925952 1223176 start.go:927] validating driver "docker" against &{Name:old-k8s-version-283312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-283312 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:54:22.926098 1223176 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:54:22.926815 1223176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:54:22.996793 1223176 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:54:22.987599484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:54:22.997163 1223176 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:54:22.997194 1223176 cni.go:84] Creating CNI manager for ""
	I1123 08:54:22.997271 1223176 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:54:22.997309 1223176 start.go:353] cluster config:
	{Name:old-k8s-version-283312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-283312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:54:23.003110 1223176 out.go:179] * Starting "old-k8s-version-283312" primary control-plane node in "old-k8s-version-283312" cluster
	I1123 08:54:23.006017 1223176 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:54:23.008796 1223176 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:54:23.011682 1223176 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:54:23.011740 1223176 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1123 08:54:23.011750 1223176 cache.go:65] Caching tarball of preloaded images
	I1123 08:54:23.011785 1223176 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:54:23.011843 1223176 preload.go:238] Found /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 08:54:23.011854 1223176 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1123 08:54:23.011976 1223176 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/config.json ...
	I1123 08:54:23.031527 1223176 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:54:23.031551 1223176 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:54:23.031565 1223176 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:54:23.031599 1223176 start.go:360] acquireMachinesLock for old-k8s-version-283312: {Name:mk6342c5cc3dd03ef4a67a137840af521342123c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:54:23.031658 1223176 start.go:364] duration metric: took 34.485µs to acquireMachinesLock for "old-k8s-version-283312"
	I1123 08:54:23.031682 1223176 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:54:23.031691 1223176 fix.go:54] fixHost starting: 
	I1123 08:54:23.031964 1223176 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Status}}
	I1123 08:54:23.049904 1223176 fix.go:112] recreateIfNeeded on old-k8s-version-283312: state=Stopped err=<nil>
	W1123 08:54:23.049933 1223176 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 08:54:23.053205 1223176 out.go:252] * Restarting existing docker container for "old-k8s-version-283312" ...
	I1123 08:54:23.053282 1223176 cli_runner.go:164] Run: docker start old-k8s-version-283312
	I1123 08:54:23.307351 1223176 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Status}}
	I1123 08:54:23.328459 1223176 kic.go:430] container "old-k8s-version-283312" state is running.
	I1123 08:54:23.328876 1223176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-283312
	I1123 08:54:23.371658 1223176 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/config.json ...
	I1123 08:54:23.371891 1223176 machine.go:94] provisionDockerMachine start ...
	I1123 08:54:23.371952 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:23.393881 1223176 main.go:143] libmachine: Using SSH client type: native
	I1123 08:54:23.395582 1223176 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34517 <nil> <nil>}
	I1123 08:54:23.395600 1223176 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:54:23.396603 1223176 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:54:26.546853 1223176 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-283312
	
	I1123 08:54:26.546875 1223176 ubuntu.go:182] provisioning hostname "old-k8s-version-283312"
	I1123 08:54:26.546936 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:26.564565 1223176 main.go:143] libmachine: Using SSH client type: native
	I1123 08:54:26.564873 1223176 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34517 <nil> <nil>}
	I1123 08:54:26.564890 1223176 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-283312 && echo "old-k8s-version-283312" | sudo tee /etc/hostname
	I1123 08:54:26.728847 1223176 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-283312
	
	I1123 08:54:26.728935 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:26.747026 1223176 main.go:143] libmachine: Using SSH client type: native
	I1123 08:54:26.747367 1223176 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34517 <nil> <nil>}
	I1123 08:54:26.747390 1223176 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-283312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-283312/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-283312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:54:26.895355 1223176 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:54:26.895381 1223176 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 08:54:26.895411 1223176 ubuntu.go:190] setting up certificates
	I1123 08:54:26.895421 1223176 provision.go:84] configureAuth start
	I1123 08:54:26.895493 1223176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-283312
	I1123 08:54:26.911996 1223176 provision.go:143] copyHostCerts
	I1123 08:54:26.912074 1223176 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem, removing ...
	I1123 08:54:26.912091 1223176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem
	I1123 08:54:26.912167 1223176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 08:54:26.912262 1223176 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem, removing ...
	I1123 08:54:26.912270 1223176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem
	I1123 08:54:26.912301 1223176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 08:54:26.912355 1223176 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem, removing ...
	I1123 08:54:26.912363 1223176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem
	I1123 08:54:26.912385 1223176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
	I1123 08:54:26.912441 1223176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-283312 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-283312]
	I1123 08:54:27.185383 1223176 provision.go:177] copyRemoteCerts
	I1123 08:54:27.185449 1223176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:54:27.185495 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:27.203168 1223176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34517 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:54:27.307149 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:54:27.325216 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1123 08:54:27.343357 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:54:27.360855 1223176 provision.go:87] duration metric: took 465.408908ms to configureAuth
	I1123 08:54:27.360885 1223176 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:54:27.361074 1223176 config.go:182] Loaded profile config "old-k8s-version-283312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:54:27.361199 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:27.382609 1223176 main.go:143] libmachine: Using SSH client type: native
	I1123 08:54:27.383052 1223176 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34517 <nil> <nil>}
	I1123 08:54:27.383083 1223176 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:54:27.740021 1223176 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:54:27.740048 1223176 machine.go:97] duration metric: took 4.368146679s to provisionDockerMachine
	I1123 08:54:27.740059 1223176 start.go:293] postStartSetup for "old-k8s-version-283312" (driver="docker")
	I1123 08:54:27.740070 1223176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:54:27.740133 1223176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:54:27.740176 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:27.759020 1223176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34517 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:54:27.862686 1223176 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:54:27.865746 1223176 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:54:27.865774 1223176 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:54:27.865804 1223176 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/addons for local assets ...
	I1123 08:54:27.865902 1223176 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/files for local assets ...
	I1123 08:54:27.865985 1223176 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem -> 10431592.pem in /etc/ssl/certs
	I1123 08:54:27.866091 1223176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:54:27.873201 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:54:27.890078 1223176 start.go:296] duration metric: took 150.004726ms for postStartSetup
	I1123 08:54:27.890204 1223176 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:54:27.890267 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:27.906955 1223176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34517 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:54:28.010415 1223176 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:54:28.015509 1223176 fix.go:56] duration metric: took 4.983810969s for fixHost
	I1123 08:54:28.015538 1223176 start.go:83] releasing machines lock for "old-k8s-version-283312", held for 4.983865589s
	I1123 08:54:28.015615 1223176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-283312
	I1123 08:54:28.033105 1223176 ssh_runner.go:195] Run: cat /version.json
	I1123 08:54:28.033127 1223176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:54:28.033161 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:28.033213 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:28.057814 1223176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34517 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:54:28.068870 1223176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34517 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:54:28.166665 1223176 ssh_runner.go:195] Run: systemctl --version
	I1123 08:54:28.255249 1223176 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:54:28.289516 1223176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:54:28.293707 1223176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:54:28.293819 1223176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:54:28.301119 1223176 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:54:28.301153 1223176 start.go:496] detecting cgroup driver to use...
	I1123 08:54:28.301189 1223176 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:54:28.301247 1223176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:54:28.315563 1223176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:54:28.328236 1223176 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:54:28.328310 1223176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:54:28.343291 1223176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:54:28.356400 1223176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:54:28.477730 1223176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:54:28.601634 1223176 docker.go:234] disabling docker service ...
	I1123 08:54:28.601747 1223176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:54:28.616306 1223176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:54:28.628956 1223176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:54:28.752975 1223176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:54:28.869799 1223176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:54:28.883165 1223176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:54:28.901258 1223176 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1123 08:54:28.901339 1223176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:54:28.909857 1223176 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 08:54:28.909960 1223176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:54:28.919646 1223176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:54:28.927760 1223176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:54:28.943230 1223176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:54:28.951387 1223176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:54:28.960728 1223176 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:54:28.968468 1223176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:54:28.976789 1223176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:54:28.983810 1223176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:54:28.991072 1223176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:54:29.107863 1223176 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 08:54:29.286136 1223176 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:54:29.286252 1223176 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:54:29.289888 1223176 start.go:564] Will wait 60s for crictl version
	I1123 08:54:29.289948 1223176 ssh_runner.go:195] Run: which crictl
	I1123 08:54:29.293289 1223176 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:54:29.321076 1223176 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:54:29.321185 1223176 ssh_runner.go:195] Run: crio --version
	I1123 08:54:29.353156 1223176 ssh_runner.go:195] Run: crio --version
	I1123 08:54:29.391298 1223176 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1123 08:54:29.394201 1223176 cli_runner.go:164] Run: docker network inspect old-k8s-version-283312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:54:29.410331 1223176 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:54:29.414049 1223176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:54:29.423335 1223176 kubeadm.go:884] updating cluster {Name:old-k8s-version-283312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-283312 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:54:29.423696 1223176 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:54:29.423808 1223176 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:54:29.463176 1223176 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:54:29.463238 1223176 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:54:29.463321 1223176 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:54:29.489255 1223176 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:54:29.489280 1223176 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:54:29.489289 1223176 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1123 08:54:29.489392 1223176 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-283312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-283312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:54:29.489471 1223176 ssh_runner.go:195] Run: crio config
	I1123 08:54:29.542061 1223176 cni.go:84] Creating CNI manager for ""
	I1123 08:54:29.542086 1223176 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:54:29.542131 1223176 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:54:29.542160 1223176 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-283312 NodeName:old-k8s-version-283312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:54:29.542308 1223176 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-283312"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:54:29.542385 1223176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1123 08:54:29.550434 1223176 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:54:29.550531 1223176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:54:29.558085 1223176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1123 08:54:29.570629 1223176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:54:29.582971 1223176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1123 08:54:29.595961 1223176 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:54:29.599711 1223176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:54:29.609222 1223176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:54:29.728352 1223176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:54:29.749675 1223176 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312 for IP: 192.168.85.2
	I1123 08:54:29.749693 1223176 certs.go:195] generating shared ca certs ...
	I1123 08:54:29.749708 1223176 certs.go:227] acquiring lock for ca certs: {Name:mk8b2dd1177c57b74f955f055073d275001ee616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:54:29.749842 1223176 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key
	I1123 08:54:29.749892 1223176 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key
	I1123 08:54:29.749899 1223176 certs.go:257] generating profile certs ...
	I1123 08:54:29.749983 1223176 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.key
	I1123 08:54:29.750047 1223176 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/apiserver.key.0b5b326f
	I1123 08:54:29.750088 1223176 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/proxy-client.key
	I1123 08:54:29.750216 1223176 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem (1338 bytes)
	W1123 08:54:29.750247 1223176 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159_empty.pem, impossibly tiny 0 bytes
	I1123 08:54:29.750255 1223176 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:54:29.750291 1223176 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:54:29.750316 1223176 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:54:29.750341 1223176 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem (1675 bytes)
	I1123 08:54:29.750386 1223176 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:54:29.751012 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:54:29.768492 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:54:29.785757 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:54:29.804030 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:54:29.821360 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1123 08:54:29.839144 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:54:29.858550 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:54:29.875751 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:54:29.895232 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:54:29.913770 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem --> /usr/share/ca-certificates/1043159.pem (1338 bytes)
	I1123 08:54:29.935388 1223176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /usr/share/ca-certificates/10431592.pem (1708 bytes)
	I1123 08:54:29.961654 1223176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:54:29.977561 1223176 ssh_runner.go:195] Run: openssl version
	I1123 08:54:29.985974 1223176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:54:29.996765 1223176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:54:30.001227 1223176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:54:30.001379 1223176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:54:30.048763 1223176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:54:30.057740 1223176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1043159.pem && ln -fs /usr/share/ca-certificates/1043159.pem /etc/ssl/certs/1043159.pem"
	I1123 08:54:30.067344 1223176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1043159.pem
	I1123 08:54:30.072244 1223176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:03 /usr/share/ca-certificates/1043159.pem
	I1123 08:54:30.072369 1223176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1043159.pem
	I1123 08:54:30.118796 1223176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1043159.pem /etc/ssl/certs/51391683.0"
	I1123 08:54:30.128096 1223176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10431592.pem && ln -fs /usr/share/ca-certificates/10431592.pem /etc/ssl/certs/10431592.pem"
	I1123 08:54:30.136878 1223176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10431592.pem
	I1123 08:54:30.141985 1223176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:03 /usr/share/ca-certificates/10431592.pem
	I1123 08:54:30.142074 1223176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10431592.pem
	I1123 08:54:30.184898 1223176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10431592.pem /etc/ssl/certs/3ec20f2e.0"
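The symlinks created above follow OpenSSL's subject-hash convention: the link name is the output of `openssl x509 -hash` plus a `.0` suffix, so TLS clients scanning /etc/ssl/certs can locate the CA. A rough Go sketch of that step, shelling out to openssl the same way the log does (the certificate path is just the example from this run):

// ca_symlink.go — illustrative only: create /etc/ssl/certs/<subject-hash>.0
// pointing at a CA PEM, as the ln -fs steps above do.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log

	// Ask openssl for the subject hash, exactly as the log does.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "openssl:", err)
		os.Exit(1)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")

	// ln -fs: replace any existing link, then point it at the certificate.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, "symlink:", err)
		os.Exit(1)
	}
	fmt.Println(cert, "->", link)
}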
	I1123 08:54:30.193813 1223176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:54:30.198010 1223176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 08:54:30.239938 1223176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 08:54:30.280953 1223176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 08:54:30.321826 1223176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 08:54:30.364720 1223176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 08:54:30.407107 1223176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
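The six openssl invocations above are expiry checks: `-checkend 86400` fails if a certificate expires within the next 24 hours, presumably so the tooling can decide whether certificates need regenerating. The same check can be approximated in Go without shelling out; the path below is one of the certificates checked in the log:

// checkend.go — illustrative only: a Go equivalent of
// `openssl x509 -noout -in <cert> -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}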
	I1123 08:54:30.451110 1223176 kubeadm.go:401] StartCluster: {Name:old-k8s-version-283312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-283312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:54:30.451215 1223176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:54:30.451287 1223176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:54:30.514126 1223176 cri.go:89] found id: ""
	I1123 08:54:30.514219 1223176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:54:30.526195 1223176 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 08:54:30.526222 1223176 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 08:54:30.526279 1223176 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 08:54:30.540385 1223176 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:54:30.542060 1223176 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-283312" does not appear in /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:54:30.542352 1223176 kubeconfig.go:62] /home/jenkins/minikube-integration/21966-1041293/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-283312" cluster setting kubeconfig missing "old-k8s-version-283312" context setting]
	I1123 08:54:30.542869 1223176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:54:30.544636 1223176 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 08:54:30.566780 1223176 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 08:54:30.566810 1223176 kubeadm.go:602] duration metric: took 40.581911ms to restartPrimaryControlPlane
	I1123 08:54:30.566821 1223176 kubeadm.go:403] duration metric: took 115.720774ms to StartCluster
	I1123 08:54:30.566858 1223176 settings.go:142] acquiring lock: {Name:mk23f3092f33e47ced9558cb4bac2b30c55547fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:54:30.566937 1223176 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:54:30.567951 1223176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
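The kubeconfig "repair" logged above amounts to adding the missing cluster and context entries for old-k8s-version-283312 and pointing the current context at them. A minimal sketch with client-go's clientcmd API is below; the server URL and certificate paths are illustrative, taken from this run's directory layout, not minikube's actual implementation.

// kubeconfig_repair.go — illustrative only: add the missing cluster/context
// entries described by the "needs updating (will repair)" message above.
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := clientcmd.RecommendedHomeFile // typically ~/.kube/config
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		cfg = api.NewConfig() // start fresh if the file is missing or unreadable
	}

	const name = "old-k8s-version-283312"
	cfg.Clusters[name] = &api.Cluster{
		Server:               "https://192.168.85.2:8443",
		CertificateAuthority: "/home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt", // example path
	}
	cfg.AuthInfos[name] = &api.AuthInfo{
		// Example paths modeled on the profile directory seen earlier in the log.
		ClientCertificate: "/home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/" + name + "/client.crt",
		ClientKey:         "/home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/" + name + "/client.key",
	}
	cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	cfg.CurrentContext = name

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}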
	I1123 08:54:30.568203 1223176 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:54:30.568602 1223176 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:54:30.568672 1223176 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-283312"
	I1123 08:54:30.568687 1223176 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-283312"
	W1123 08:54:30.568697 1223176 addons.go:248] addon storage-provisioner should already be in state true
	I1123 08:54:30.568720 1223176 host.go:66] Checking if "old-k8s-version-283312" exists ...
	I1123 08:54:30.569230 1223176 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Status}}
	I1123 08:54:30.569457 1223176 config.go:182] Loaded profile config "old-k8s-version-283312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:54:30.569507 1223176 addons.go:70] Setting dashboard=true in profile "old-k8s-version-283312"
	I1123 08:54:30.569516 1223176 addons.go:239] Setting addon dashboard=true in "old-k8s-version-283312"
	W1123 08:54:30.569530 1223176 addons.go:248] addon dashboard should already be in state true
	I1123 08:54:30.569551 1223176 host.go:66] Checking if "old-k8s-version-283312" exists ...
	I1123 08:54:30.569998 1223176 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Status}}
	I1123 08:54:30.570730 1223176 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-283312"
	I1123 08:54:30.570751 1223176 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-283312"
	I1123 08:54:30.571064 1223176 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Status}}
	I1123 08:54:30.576246 1223176 out.go:179] * Verifying Kubernetes components...
	I1123 08:54:30.583287 1223176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:54:30.628030 1223176 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:54:30.628180 1223176 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 08:54:30.630430 1223176 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-283312"
	W1123 08:54:30.630549 1223176 addons.go:248] addon default-storageclass should already be in state true
	I1123 08:54:30.630577 1223176 host.go:66] Checking if "old-k8s-version-283312" exists ...
	I1123 08:54:30.631004 1223176 cli_runner.go:164] Run: docker container inspect old-k8s-version-283312 --format={{.State.Status}}
	I1123 08:54:30.631490 1223176 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:54:30.631506 1223176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:54:30.631549 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:30.638001 1223176 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 08:54:30.641287 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 08:54:30.641315 1223176 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 08:54:30.641380 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:30.691505 1223176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34517 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:54:30.705604 1223176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34517 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:54:30.705623 1223176 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:54:30.705694 1223176 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:54:30.705794 1223176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-283312
	I1123 08:54:30.732906 1223176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34517 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/old-k8s-version-283312/id_rsa Username:docker}
	I1123 08:54:30.900993 1223176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:54:30.984244 1223176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:54:31.017381 1223176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:54:31.087759 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 08:54:31.087793 1223176 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 08:54:31.121821 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 08:54:31.121854 1223176 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 08:54:31.211595 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 08:54:31.211625 1223176 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 08:54:31.343338 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 08:54:31.343372 1223176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 08:54:31.399603 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 08:54:31.399634 1223176 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 08:54:31.422465 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 08:54:31.422504 1223176 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 08:54:31.446212 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 08:54:31.446247 1223176 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 08:54:31.465514 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 08:54:31.465546 1223176 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 08:54:31.495270 1223176 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:54:31.495300 1223176 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 08:54:31.520159 1223176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:54:36.650595 1223176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.749569552s)
	I1123 08:54:36.650920 1223176 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.66664842s)
	I1123 08:54:36.650951 1223176 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-283312" to be "Ready" ...
	I1123 08:54:36.700625 1223176 node_ready.go:49] node "old-k8s-version-283312" is "Ready"
	I1123 08:54:36.700653 1223176 node_ready.go:38] duration metric: took 49.689804ms for node "old-k8s-version-283312" to be "Ready" ...
	I1123 08:54:36.700667 1223176 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:54:36.700722 1223176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:54:37.274872 1223176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.257454266s)
	I1123 08:54:37.744799 1223176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.224596598s)
	I1123 08:54:37.744908 1223176 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.044164421s)
	I1123 08:54:37.745080 1223176 api_server.go:72] duration metric: took 7.176839071s to wait for apiserver process to appear ...
	I1123 08:54:37.745096 1223176 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:54:37.745114 1223176 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 08:54:37.748123 1223176 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-283312 addons enable metrics-server
	
	I1123 08:54:37.751057 1223176 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1123 08:54:37.754022 1223176 addons.go:530] duration metric: took 7.185420554s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1123 08:54:37.754637 1223176 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 08:54:37.756026 1223176 api_server.go:141] control plane version: v1.28.0
	I1123 08:54:37.756050 1223176 api_server.go:131] duration metric: took 10.947425ms to wait for apiserver health ...
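The healthz wait above is a plain HTTPS GET against the API server, retried until it returns 200 with the body "ok". A simplified probe is sketched below; it disables TLS verification purely to keep the example short, which a real check should not do.

// healthz_probe.go — illustrative only: the GET behind the
// "Checking apiserver healthz at https://192.168.85.2:8443/healthz" lines.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only; trust the cluster CA in real use
		},
	}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect "200 ok"
}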
	I1123 08:54:37.756059 1223176 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:54:37.759379 1223176 system_pods.go:59] 8 kube-system pods found
	I1123 08:54:37.759423 1223176 system_pods.go:61] "coredns-5dd5756b68-mpf62" [29956376-ee4e-402e-98dc-864a4ff169d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:54:37.759433 1223176 system_pods.go:61] "etcd-old-k8s-version-283312" [171ec724-181b-4c1c-814b-7b3eb801b010] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:54:37.759439 1223176 system_pods.go:61] "kindnet-fnbgj" [ff60f979-e04b-41da-8682-971a31d72da3] Running
	I1123 08:54:37.759447 1223176 system_pods.go:61] "kube-apiserver-old-k8s-version-283312" [68187f7b-ab9d-4cda-97c7-0559bc9c6b8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:54:37.759453 1223176 system_pods.go:61] "kube-controller-manager-old-k8s-version-283312" [6824fd9a-3bcc-4856-b840-5f6c6866e870] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:54:37.759477 1223176 system_pods.go:61] "kube-proxy-5w4q4" [886c8da3-dfce-4d49-b73c-6799d52d1028] Running
	I1123 08:54:37.759483 1223176 system_pods.go:61] "kube-scheduler-old-k8s-version-283312" [e1a93883-3c97-4c10-abcc-8917c5752ebf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:54:37.759487 1223176 system_pods.go:61] "storage-provisioner" [f8356741-0113-4d0f-b602-081220c219b4] Running
	I1123 08:54:37.759494 1223176 system_pods.go:74] duration metric: took 3.426942ms to wait for pod list to return data ...
	I1123 08:54:37.759504 1223176 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:54:37.761901 1223176 default_sa.go:45] found service account: "default"
	I1123 08:54:37.761925 1223176 default_sa.go:55] duration metric: took 2.414613ms for default service account to be created ...
	I1123 08:54:37.761935 1223176 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:54:37.765620 1223176 system_pods.go:86] 8 kube-system pods found
	I1123 08:54:37.765652 1223176 system_pods.go:89] "coredns-5dd5756b68-mpf62" [29956376-ee4e-402e-98dc-864a4ff169d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:54:37.765662 1223176 system_pods.go:89] "etcd-old-k8s-version-283312" [171ec724-181b-4c1c-814b-7b3eb801b010] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:54:37.765667 1223176 system_pods.go:89] "kindnet-fnbgj" [ff60f979-e04b-41da-8682-971a31d72da3] Running
	I1123 08:54:37.765675 1223176 system_pods.go:89] "kube-apiserver-old-k8s-version-283312" [68187f7b-ab9d-4cda-97c7-0559bc9c6b8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:54:37.765687 1223176 system_pods.go:89] "kube-controller-manager-old-k8s-version-283312" [6824fd9a-3bcc-4856-b840-5f6c6866e870] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:54:37.765694 1223176 system_pods.go:89] "kube-proxy-5w4q4" [886c8da3-dfce-4d49-b73c-6799d52d1028] Running
	I1123 08:54:37.765706 1223176 system_pods.go:89] "kube-scheduler-old-k8s-version-283312" [e1a93883-3c97-4c10-abcc-8917c5752ebf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:54:37.765716 1223176 system_pods.go:89] "storage-provisioner" [f8356741-0113-4d0f-b602-081220c219b4] Running
	I1123 08:54:37.765723 1223176 system_pods.go:126] duration metric: took 3.782362ms to wait for k8s-apps to be running ...
	I1123 08:54:37.765731 1223176 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:54:37.765803 1223176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:54:37.779379 1223176 system_svc.go:56] duration metric: took 13.639618ms WaitForService to wait for kubelet
	I1123 08:54:37.779408 1223176 kubeadm.go:587] duration metric: took 7.211175007s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:54:37.779426 1223176 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:54:37.782442 1223176 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:54:37.782471 1223176 node_conditions.go:123] node cpu capacity is 2
	I1123 08:54:37.782483 1223176 node_conditions.go:105] duration metric: took 3.052142ms to run NodePressure ...
	I1123 08:54:37.782496 1223176 start.go:242] waiting for startup goroutines ...
	I1123 08:54:37.782503 1223176 start.go:247] waiting for cluster config update ...
	I1123 08:54:37.782514 1223176 start.go:256] writing updated cluster config ...
	I1123 08:54:37.782805 1223176 ssh_runner.go:195] Run: rm -f paused
	I1123 08:54:37.786726 1223176 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:54:37.791110 1223176 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-mpf62" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:54:39.796606 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:54:41.796754 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:54:43.797140 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:54:46.296390 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:54:48.299069 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:54:50.797866 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:54:52.799034 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:54:55.296738 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:54:57.298203 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:54:59.797538 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:55:02.297252 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:55:04.796551 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	W1123 08:55:06.797317 1223176 pod_ready.go:104] pod "coredns-5dd5756b68-mpf62" is not "Ready", error: <nil>
	I1123 08:55:07.799004 1223176 pod_ready.go:94] pod "coredns-5dd5756b68-mpf62" is "Ready"
	I1123 08:55:07.799032 1223176 pod_ready.go:86] duration metric: took 30.007894855s for pod "coredns-5dd5756b68-mpf62" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:07.802736 1223176 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:07.810647 1223176 pod_ready.go:94] pod "etcd-old-k8s-version-283312" is "Ready"
	I1123 08:55:07.810673 1223176 pod_ready.go:86] duration metric: took 7.909173ms for pod "etcd-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:07.813562 1223176 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:07.818046 1223176 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-283312" is "Ready"
	I1123 08:55:07.818113 1223176 pod_ready.go:86] duration metric: took 4.487573ms for pod "kube-apiserver-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:07.821115 1223176 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:07.994191 1223176 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-283312" is "Ready"
	I1123 08:55:07.994220 1223176 pod_ready.go:86] duration metric: took 173.080536ms for pod "kube-controller-manager-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:08.195966 1223176 pod_ready.go:83] waiting for pod "kube-proxy-5w4q4" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:08.594466 1223176 pod_ready.go:94] pod "kube-proxy-5w4q4" is "Ready"
	I1123 08:55:08.594497 1223176 pod_ready.go:86] duration metric: took 398.506124ms for pod "kube-proxy-5w4q4" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:08.795256 1223176 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:09.194206 1223176 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-283312" is "Ready"
	I1123 08:55:09.194236 1223176 pod_ready.go:86] duration metric: took 398.95114ms for pod "kube-scheduler-old-k8s-version-283312" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:55:09.194251 1223176 pod_ready.go:40] duration metric: took 31.407490817s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
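The "extra waiting" block above polls kube-system pods carrying one of the listed component labels until each reports the Ready condition. The sketch below shows the same readiness test with client-go, using a single example selector (k8s-app=kube-dns) and a 4-minute ceiling to mirror the log; it is an illustration, not minikube's pod_ready implementation.

// pods_ready.go — illustrative only: poll pods matching a label selector until
// every one reports the Ready condition, or the timeout elapses.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allReady reports whether every pod matching selector in ns has Ready=True.
func allReady(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return len(pods.Items) > 0, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		if ok, err := allReady(ctx, cs, "kube-system", "k8s-app=kube-dns"); err == nil && ok {
			fmt.Println("all matching pods are Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pods to become Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}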
	I1123 08:55:09.254613 1223176 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1123 08:55:09.257634 1223176 out.go:203] 
	W1123 08:55:09.260656 1223176 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 08:55:09.263530 1223176 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 08:55:09.266346 1223176 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-283312" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 08:55:13 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:13.982301037Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:55:13 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:13.989268221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:55:13 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:13.990302211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:55:14 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:14.009586004Z" level=info msg="Created container 8c99e240d0d3c5b09f26cce84e285d1b5311e5d85caeceee56e98e8d83ab6deb: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw/dashboard-metrics-scraper" id=2b128219-d82a-4b10-b2bc-7994320985ca name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:55:14 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:14.010710773Z" level=info msg="Starting container: 8c99e240d0d3c5b09f26cce84e285d1b5311e5d85caeceee56e98e8d83ab6deb" id=ca82b164-38c9-4acb-a337-33a83d77ed6f name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:55:14 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:14.012457482Z" level=info msg="Started container" PID=1651 containerID=8c99e240d0d3c5b09f26cce84e285d1b5311e5d85caeceee56e98e8d83ab6deb description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw/dashboard-metrics-scraper id=ca82b164-38c9-4acb-a337-33a83d77ed6f name=/runtime.v1.RuntimeService/StartContainer sandboxID=0d5a8b25de23159a9133fab4506e58f308fc615ba30ae8e4d87bc4e947e0ef3b
	Nov 23 08:55:14 old-k8s-version-283312 conmon[1649]: conmon 8c99e240d0d3c5b09f26 <ninfo>: container 1651 exited with status 1
	Nov 23 08:55:14 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:14.220972558Z" level=info msg="Removing container: f520624d2f51e038e6961af4664bf4abaea2a25044a1187d43ffab83b630bfa3" id=f8f848b6-4c8d-4ad6-b045-dcd6921b50d0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:55:14 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:14.228542064Z" level=info msg="Error loading conmon cgroup of container f520624d2f51e038e6961af4664bf4abaea2a25044a1187d43ffab83b630bfa3: cgroup deleted" id=f8f848b6-4c8d-4ad6-b045-dcd6921b50d0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:55:14 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:14.231522749Z" level=info msg="Removed container f520624d2f51e038e6961af4664bf4abaea2a25044a1187d43ffab83b630bfa3: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw/dashboard-metrics-scraper" id=f8f848b6-4c8d-4ad6-b045-dcd6921b50d0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.757395992Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.763225427Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.763259215Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.763283879Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.767087392Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.767120532Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.767295666Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.770308104Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.770338954Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.7703589Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.774001417Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.774041227Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.774065596Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.777029001Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:55:16 old-k8s-version-283312 crio[654]: time="2025-11-23T08:55:16.777073308Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	8c99e240d0d3c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   2                   0d5a8b25de231       dashboard-metrics-scraper-5f989dc9cf-s72zw       kubernetes-dashboard
	55f4a448b0d3e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   6b239fcfc2bd2       storage-provisioner                              kube-system
	62de83d6e4fd1       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   29 seconds ago      Running             kubernetes-dashboard        0                   06ec79548f5d7       kubernetes-dashboard-8694d4445c-6t89s            kubernetes-dashboard
	fcf4f481baec7       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           51 seconds ago      Running             coredns                     1                   18a7d1c50d93c       coredns-5dd5756b68-mpf62                         kube-system
	b31f17b2d91d4       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           51 seconds ago      Running             kube-proxy                  1                   262f20c5ca042       kube-proxy-5w4q4                                 kube-system
	63c05087bc349       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   6b239fcfc2bd2       storage-provisioner                              kube-system
	6adc196492ef0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   b8380778355eb       busybox                                          default
	ce8860867859e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   53d1d20b4828b       kindnet-fnbgj                                    kube-system
	ca452ae3435ab       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           57 seconds ago      Running             kube-scheduler              1                   6a304d21a140f       kube-scheduler-old-k8s-version-283312            kube-system
	d7311c2c5699a       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           57 seconds ago      Running             kube-apiserver              1                   4e058b9a054d0       kube-apiserver-old-k8s-version-283312            kube-system
	7b9b4a6e426f2       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           57 seconds ago      Running             kube-controller-manager     1                   7f7ce9cd9190a       kube-controller-manager-old-k8s-version-283312   kube-system
	247b7aa0c1261       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           57 seconds ago      Running             etcd                        1                   7985141bf10aa       etcd-old-k8s-version-283312                      kube-system
	
	
	==> coredns [fcf4f481baec79c0761b307b5212215829faaa625e6e489cf694d8fb1d2d4062] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42031 - 9329 "HINFO IN 5664389183855982145.3112708891388839210. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004477112s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-283312
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-283312
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=old-k8s-version-283312
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_53_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:53:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-283312
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:55:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:55:05 +0000   Sun, 23 Nov 2025 08:53:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:55:05 +0000   Sun, 23 Nov 2025 08:53:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:55:05 +0000   Sun, 23 Nov 2025 08:53:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:55:05 +0000   Sun, 23 Nov 2025 08:53:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-283312
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                392eb6cc-4f42-4cea-8c55-b6ca8bbf6612
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-5dd5756b68-mpf62                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     106s
	  kube-system                 etcd-old-k8s-version-283312                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-fnbgj                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-283312             250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-old-k8s-version-283312    200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-5w4q4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-283312             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-s72zw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-6t89s             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  119s               kubelet          Node old-k8s-version-283312 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s               kubelet          Node old-k8s-version-283312 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s               kubelet          Node old-k8s-version-283312 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node old-k8s-version-283312 event: Registered Node old-k8s-version-283312 in Controller
	  Normal  NodeReady                91s                kubelet          Node old-k8s-version-283312 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 58s)  kubelet          Node old-k8s-version-283312 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 58s)  kubelet          Node old-k8s-version-283312 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 58s)  kubelet          Node old-k8s-version-283312 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                node-controller  Node old-k8s-version-283312 event: Registered Node old-k8s-version-283312 in Controller
	
	
	==> dmesg <==
	[Nov23 08:28] overlayfs: idmapped layers are currently not supported
	[Nov23 08:32] overlayfs: idmapped layers are currently not supported
	[Nov23 08:33] overlayfs: idmapped layers are currently not supported
	[Nov23 08:34] overlayfs: idmapped layers are currently not supported
	[Nov23 08:35] overlayfs: idmapped layers are currently not supported
	[Nov23 08:36] overlayfs: idmapped layers are currently not supported
	[Nov23 08:37] overlayfs: idmapped layers are currently not supported
	[Nov23 08:38] overlayfs: idmapped layers are currently not supported
	[  +8.276067] overlayfs: idmapped layers are currently not supported
	[Nov23 08:39] overlayfs: idmapped layers are currently not supported
	[ +25.090966] overlayfs: idmapped layers are currently not supported
	[Nov23 08:40] overlayfs: idmapped layers are currently not supported
	[ +26.896711] overlayfs: idmapped layers are currently not supported
	[Nov23 08:41] overlayfs: idmapped layers are currently not supported
	[Nov23 08:43] overlayfs: idmapped layers are currently not supported
	[Nov23 08:45] overlayfs: idmapped layers are currently not supported
	[Nov23 08:46] overlayfs: idmapped layers are currently not supported
	[Nov23 08:47] overlayfs: idmapped layers are currently not supported
	[Nov23 08:49] overlayfs: idmapped layers are currently not supported
	[Nov23 08:51] overlayfs: idmapped layers are currently not supported
	[ +55.116920] overlayfs: idmapped layers are currently not supported
	[Nov23 08:52] overlayfs: idmapped layers are currently not supported
	[  +5.731396] overlayfs: idmapped layers are currently not supported
	[Nov23 08:53] overlayfs: idmapped layers are currently not supported
	[Nov23 08:54] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [247b7aa0c1261bc65c70f1271c4f8036028cf3420d07070ead4ca25228884653] <==
	{"level":"info","ts":"2025-11-23T08:54:31.060727Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T08:54:31.060761Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T08:54:31.061046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-23T08:54:31.061686Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-23T08:54:31.062066Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:54:31.063239Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:54:31.073531Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T08:54:31.07992Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T08:54:31.083213Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T08:54:31.083596Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T08:54:31.083692Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T08:54:32.907251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-23T08:54:32.907373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-23T08:54:32.907424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-23T08:54:32.907462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-23T08:54:32.907492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-23T08:54:32.90753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-23T08:54:32.90756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-23T08:54:32.911374Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-283312 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T08:54:32.911465Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:54:32.912477Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T08:54:32.923212Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:54:32.928758Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-23T08:54:32.931806Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T08:54:32.931886Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 08:55:28 up  9:37,  0 user,  load average: 1.47, 2.81, 2.49
	Linux old-k8s-version-283312 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ce8860867859e5b27abf00bdcc1cc203fb3241543231bf6a3915cb8500c83601] <==
	I1123 08:54:36.540408       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:54:36.540760       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:54:36.540887       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:54:36.540899       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:54:36.540912       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:54:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:54:36.760773       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:54:36.760799       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:54:36.760809       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:54:36.760922       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:55:06.758536       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 08:55:06.759733       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 08:55:06.760811       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 08:55:06.760864       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 08:55:07.961207       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:55:07.961306       1 metrics.go:72] Registering metrics
	I1123 08:55:07.961383       1 controller.go:711] "Syncing nftables rules"
	I1123 08:55:16.757077       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:55:16.757121       1 main.go:301] handling current node
	I1123 08:55:26.764094       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:55:26.764130       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d7311c2c5699ad0d41a6408dfece98289565e80a60184519834f707726b47a53] <==
	I1123 08:54:35.547243       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1123 08:54:35.572811       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1123 08:54:35.572894       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1123 08:54:35.582556       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 08:54:35.582781       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1123 08:54:35.589054       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 08:54:35.589100       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 08:54:35.597298       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1123 08:54:35.605164       1 aggregator.go:166] initial CRD sync complete...
	I1123 08:54:35.605190       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 08:54:35.605197       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:54:35.605205       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:54:35.607312       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E1123 08:54:35.669869       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 08:54:36.208321       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:54:37.557037       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 08:54:37.602427       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 08:54:37.629710       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:54:37.645275       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:54:37.658502       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 08:54:37.715117       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.21.131"}
	I1123 08:54:37.737768       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.163.237"}
	I1123 08:54:48.609001       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 08:54:48.869941       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:54:48.908189       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [7b9b4a6e426f263cb39534651d3c1f27d4fc7d585032ccbec072ae66318023df] <==
	I1123 08:54:48.767412       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-s72zw"
	I1123 08:54:48.773096       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="286.744951ms"
	I1123 08:54:48.773213       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-6t89s"
	I1123 08:54:48.773322       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="92.059µs"
	I1123 08:54:48.794477       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="180.004745ms"
	I1123 08:54:48.796992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="180.499559ms"
	I1123 08:54:48.841461       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.832363ms"
	I1123 08:54:48.851222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.073067ms"
	I1123 08:54:48.851370       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="42.296µs"
	I1123 08:54:48.867236       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="25.613229ms"
	I1123 08:54:48.867437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="78.464µs"
	I1123 08:54:48.877285       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1123 08:54:48.924129       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1123 08:54:49.000169       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:54:49.009341       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:54:49.009375       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 08:54:54.172449       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.701µs"
	I1123 08:54:55.178970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.517µs"
	I1123 08:54:56.179134       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.001µs"
	I1123 08:54:58.219669       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="25.032714ms"
	I1123 08:54:58.219882       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="71.949µs"
	I1123 08:55:07.720108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.045985ms"
	I1123 08:55:07.720551       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.641µs"
	I1123 08:55:14.240049       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.922µs"
	I1123 08:55:19.096480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.969µs"
	
	
	==> kube-proxy [b31f17b2d91d4748333acd703896175fb35e33ee0b0916ca17f1f3d164797f0b] <==
	I1123 08:54:36.855838       1 server_others.go:69] "Using iptables proxy"
	I1123 08:54:36.887721       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1123 08:54:36.940000       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:54:36.942304       1 server_others.go:152] "Using iptables Proxier"
	I1123 08:54:36.942360       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 08:54:36.942369       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 08:54:36.942407       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 08:54:36.942627       1 server.go:846] "Version info" version="v1.28.0"
	I1123 08:54:36.942638       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:54:36.949620       1 config.go:188] "Starting service config controller"
	I1123 08:54:36.949640       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 08:54:36.949659       1 config.go:97] "Starting endpoint slice config controller"
	I1123 08:54:36.949663       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 08:54:36.950071       1 config.go:315] "Starting node config controller"
	I1123 08:54:36.950080       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 08:54:37.055274       1 shared_informer.go:318] Caches are synced for node config
	I1123 08:54:37.055311       1 shared_informer.go:318] Caches are synced for service config
	I1123 08:54:37.055338       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ca452ae3435abe579950d1a807b7521d73e868fac14b3362725c339938db9ba9] <==
	I1123 08:54:34.062497       1 serving.go:348] Generated self-signed cert in-memory
	I1123 08:54:35.742322       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1123 08:54:35.743252       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:54:35.756807       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1123 08:54:35.757002       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1123 08:54:35.757052       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1123 08:54:35.757114       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1123 08:54:35.764717       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:54:35.765429       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1123 08:54:35.764985       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:54:35.765562       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1123 08:54:35.857522       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1123 08:54:35.866462       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1123 08:54:35.866470       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 08:54:48 old-k8s-version-283312 kubelet[779]: I1123 08:54:48.867729     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4vfk\" (UniqueName: \"kubernetes.io/projected/3c611bc7-9e3b-4e30-9e9f-8708b366992b-kube-api-access-v4vfk\") pod \"dashboard-metrics-scraper-5f989dc9cf-s72zw\" (UID: \"3c611bc7-9e3b-4e30-9e9f-8708b366992b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw"
	Nov 23 08:54:48 old-k8s-version-283312 kubelet[779]: I1123 08:54:48.867815     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tlpq\" (UniqueName: \"kubernetes.io/projected/0813d124-6f61-456a-9a7d-79a6b4d2e1a3-kube-api-access-8tlpq\") pod \"kubernetes-dashboard-8694d4445c-6t89s\" (UID: \"0813d124-6f61-456a-9a7d-79a6b4d2e1a3\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-6t89s"
	Nov 23 08:54:48 old-k8s-version-283312 kubelet[779]: I1123 08:54:48.867986     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0813d124-6f61-456a-9a7d-79a6b4d2e1a3-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-6t89s\" (UID: \"0813d124-6f61-456a-9a7d-79a6b4d2e1a3\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-6t89s"
	Nov 23 08:54:48 old-k8s-version-283312 kubelet[779]: I1123 08:54:48.868067     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3c611bc7-9e3b-4e30-9e9f-8708b366992b-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-s72zw\" (UID: \"3c611bc7-9e3b-4e30-9e9f-8708b366992b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw"
	Nov 23 08:54:49 old-k8s-version-283312 kubelet[779]: W1123 08:54:49.128106     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/205e5ea134d1e260963399a50603431b4ba8ce395d049a3383971de9672848f3/crio-06ec79548f5d7b35e91487a68ca973c6808750d2601cde2f0a3e88b74560ecb8 WatchSource:0}: Error finding container 06ec79548f5d7b35e91487a68ca973c6808750d2601cde2f0a3e88b74560ecb8: Status 404 returned error can't find the container with id 06ec79548f5d7b35e91487a68ca973c6808750d2601cde2f0a3e88b74560ecb8
	Nov 23 08:54:54 old-k8s-version-283312 kubelet[779]: I1123 08:54:54.156540     779 scope.go:117] "RemoveContainer" containerID="e57945833a9ee6b5980a83bf2e20c3b61f1920ba70b86fb54ab892deb2203a61"
	Nov 23 08:54:55 old-k8s-version-283312 kubelet[779]: I1123 08:54:55.161288     779 scope.go:117] "RemoveContainer" containerID="e57945833a9ee6b5980a83bf2e20c3b61f1920ba70b86fb54ab892deb2203a61"
	Nov 23 08:54:55 old-k8s-version-283312 kubelet[779]: I1123 08:54:55.161674     779 scope.go:117] "RemoveContainer" containerID="f520624d2f51e038e6961af4664bf4abaea2a25044a1187d43ffab83b630bfa3"
	Nov 23 08:54:55 old-k8s-version-283312 kubelet[779]: E1123 08:54:55.161990     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s72zw_kubernetes-dashboard(3c611bc7-9e3b-4e30-9e9f-8708b366992b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw" podUID="3c611bc7-9e3b-4e30-9e9f-8708b366992b"
	Nov 23 08:54:56 old-k8s-version-283312 kubelet[779]: I1123 08:54:56.164506     779 scope.go:117] "RemoveContainer" containerID="f520624d2f51e038e6961af4664bf4abaea2a25044a1187d43ffab83b630bfa3"
	Nov 23 08:54:56 old-k8s-version-283312 kubelet[779]: E1123 08:54:56.164913     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s72zw_kubernetes-dashboard(3c611bc7-9e3b-4e30-9e9f-8708b366992b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw" podUID="3c611bc7-9e3b-4e30-9e9f-8708b366992b"
	Nov 23 08:54:59 old-k8s-version-283312 kubelet[779]: I1123 08:54:59.083220     779 scope.go:117] "RemoveContainer" containerID="f520624d2f51e038e6961af4664bf4abaea2a25044a1187d43ffab83b630bfa3"
	Nov 23 08:54:59 old-k8s-version-283312 kubelet[779]: E1123 08:54:59.083557     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s72zw_kubernetes-dashboard(3c611bc7-9e3b-4e30-9e9f-8708b366992b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw" podUID="3c611bc7-9e3b-4e30-9e9f-8708b366992b"
	Nov 23 08:55:07 old-k8s-version-283312 kubelet[779]: I1123 08:55:07.200397     779 scope.go:117] "RemoveContainer" containerID="63c05087bc3492d07d07dc6676698eb369b01ebf027a57cb7753312ef9a68e38"
	Nov 23 08:55:07 old-k8s-version-283312 kubelet[779]: I1123 08:55:07.228237     779 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-6t89s" podStartSLOduration=10.326223408 podCreationTimestamp="2025-11-23 08:54:48 +0000 UTC" firstStartedPulling="2025-11-23 08:54:49.132437051 +0000 UTC m=+19.385014484" lastFinishedPulling="2025-11-23 08:54:58.034387219 +0000 UTC m=+28.286964652" observedRunningTime="2025-11-23 08:54:58.201519377 +0000 UTC m=+28.454096901" watchObservedRunningTime="2025-11-23 08:55:07.228173576 +0000 UTC m=+37.480751009"
	Nov 23 08:55:13 old-k8s-version-283312 kubelet[779]: I1123 08:55:13.978570     779 scope.go:117] "RemoveContainer" containerID="f520624d2f51e038e6961af4664bf4abaea2a25044a1187d43ffab83b630bfa3"
	Nov 23 08:55:14 old-k8s-version-283312 kubelet[779]: I1123 08:55:14.218553     779 scope.go:117] "RemoveContainer" containerID="f520624d2f51e038e6961af4664bf4abaea2a25044a1187d43ffab83b630bfa3"
	Nov 23 08:55:14 old-k8s-version-283312 kubelet[779]: I1123 08:55:14.219251     779 scope.go:117] "RemoveContainer" containerID="8c99e240d0d3c5b09f26cce84e285d1b5311e5d85caeceee56e98e8d83ab6deb"
	Nov 23 08:55:14 old-k8s-version-283312 kubelet[779]: E1123 08:55:14.219653     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s72zw_kubernetes-dashboard(3c611bc7-9e3b-4e30-9e9f-8708b366992b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw" podUID="3c611bc7-9e3b-4e30-9e9f-8708b366992b"
	Nov 23 08:55:19 old-k8s-version-283312 kubelet[779]: I1123 08:55:19.083099     779 scope.go:117] "RemoveContainer" containerID="8c99e240d0d3c5b09f26cce84e285d1b5311e5d85caeceee56e98e8d83ab6deb"
	Nov 23 08:55:19 old-k8s-version-283312 kubelet[779]: E1123 08:55:19.083451     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-s72zw_kubernetes-dashboard(3c611bc7-9e3b-4e30-9e9f-8708b366992b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-s72zw" podUID="3c611bc7-9e3b-4e30-9e9f-8708b366992b"
	Nov 23 08:55:22 old-k8s-version-283312 kubelet[779]: I1123 08:55:22.489433     779 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 23 08:55:22 old-k8s-version-283312 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:55:22 old-k8s-version-283312 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:55:22 old-k8s-version-283312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [62de83d6e4fd10e27fd4b0e1f4adf0423f70c4e01537d1a2dd0b9dc5df5f955a] <==
	2025/11/23 08:54:58 Using namespace: kubernetes-dashboard
	2025/11/23 08:54:58 Using in-cluster config to connect to apiserver
	2025/11/23 08:54:58 Using secret token for csrf signing
	2025/11/23 08:54:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 08:54:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 08:54:58 Successful initial request to the apiserver, version: v1.28.0
	2025/11/23 08:54:58 Generating JWE encryption key
	2025/11/23 08:54:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 08:54:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 08:54:58 Initializing JWE encryption key from synchronized object
	2025/11/23 08:54:58 Creating in-cluster Sidecar client
	2025/11/23 08:54:58 Serving insecurely on HTTP port: 9090
	2025/11/23 08:54:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 08:54:58 Starting overwatch
	
	
	==> storage-provisioner [55f4a448b0d3e5f8186ddea06d9e649d1ce7b5dc009cab7ea7b94a06ee6d2337] <==
	I1123 08:55:07.252735       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:55:07.266255       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:55:07.266302       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 08:55:24.662979       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:55:24.663132       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-283312_f61816f6-ed70-498e-beb0-cc6321f05a7b!
	I1123 08:55:24.664423       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69e256c0-2660-429a-be2b-9531ab7aed97", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-283312_f61816f6-ed70-498e-beb0-cc6321f05a7b became leader
	I1123 08:55:24.764175       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-283312_f61816f6-ed70-498e-beb0-cc6321f05a7b!
	
	
	==> storage-provisioner [63c05087bc3492d07d07dc6676698eb369b01ebf027a57cb7753312ef9a68e38] <==
	I1123 08:54:36.625033       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 08:55:06.626398       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-283312 -n old-k8s-version-283312
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-283312 -n old-k8s-version-283312: exit status 2 (392.120833ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-283312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.85s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-262764 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-262764 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (268.666931ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:57:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-262764 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
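The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's "check paused" pre-flight, which shells out to "sudo runc list -f json" inside the node; the stderr shows that call failing with "open /run/runc: no such file or directory" (runc's default state directory when run as root), so the addon command aborts before applying any manifests. A minimal Go sketch of that kind of check follows; it is an illustration of the failing command only, not minikube's actual implementation.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcState mirrors the fields of interest in the JSON emitted by "runc list -f json".
type runcState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	// This is the command the failure above shows minikube running inside the node.
	// With no --root override, runc (as root) reads /run/runc; on this node that
	// directory is missing, so the command exits non-zero and the check aborts.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		fmt.Println("check paused failed:", err)
		return
	}
	var states []runcState
	if err := json.Unmarshal(out, &states); err != nil {
		fmt.Println("unexpected runc output:", err)
		return
	}
	for _, s := range states {
		if s.Status == "paused" {
			fmt.Println("paused container:", s.ID)
		}
	}
}

The same "sudo runc list -f json" signature likely lies behind the other EnableAddonWhileActive and Pause failures in this run.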
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-262764 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-262764 describe deploy/metrics-server -n kube-system: exit status 1 (89.844596ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-262764 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
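The NotFound error above means the deployment was never created: the enable command exited before applying the metrics-server manifest, so there is no image to compare against the expected fake.domain/registry.k8s.io/echoserver:1.4. A small hypothetical helper (the context name and namespace are taken from this run; it is not part of the test suite) that would report which image the deployment actually uses:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ask the cluster which image the metrics-server deployment is running.
	// In this run the deployment does not exist, so kubectl returns the same
	// NotFound error captured in the log above.
	out, err := exec.Command("kubectl",
		"--context", "default-k8s-diff-port-262764",
		"-n", "kube-system",
		"get", "deployment", "metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[0].image}",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("metrics-server deployment not found: %v\n%s\n", err, out)
		return
	}
	fmt.Println("metrics-server image:", string(out))
}

On a successful enable with the flags used above, the query would print the overridden fake.domain/registry.k8s.io/echoserver:1.4 image, which is the substring the test looks for in the deployment description.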
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-262764
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-262764:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c",
	        "Created": "2025-11-23T08:55:37.40456105Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1227173,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:55:37.486551566Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c/hostname",
	        "HostsPath": "/var/lib/docker/containers/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c/hosts",
	        "LogPath": "/var/lib/docker/containers/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c-json.log",
	        "Name": "/default-k8s-diff-port-262764",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-262764:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-262764",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c",
	                "LowerDir": "/var/lib/docker/overlay2/f72313a8ebe5346b2a4f86d480258d4f1e2db66dfe4fbd251eebdfdd3ddbaac3-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f72313a8ebe5346b2a4f86d480258d4f1e2db66dfe4fbd251eebdfdd3ddbaac3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f72313a8ebe5346b2a4f86d480258d4f1e2db66dfe4fbd251eebdfdd3ddbaac3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f72313a8ebe5346b2a4f86d480258d4f1e2db66dfe4fbd251eebdfdd3ddbaac3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-262764",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-262764/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-262764",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-262764",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-262764",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a6747b25b5760bad857c4539831f7d4164c526bdb920e550a87e493f2b191784",
	            "SandboxKey": "/var/run/docker/netns/a6747b25b576",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34522"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34523"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34526"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34524"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34525"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-262764": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:c4:5f:08:53:86",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a88fa92783a732a39910d80f98969c606d7a2bdb381d5a678aa8210ce1334564",
	                    "EndpointID": "cc7bddf0d8f1b86a22400de7125dd22b7847861e80b5034c0d7900080bb32925",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-262764",
	                        "c3373e1079a6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-262764 -n default-k8s-diff-port-262764
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-262764 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-262764 logs -n 25: (1.219189064s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-082524 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-082524                │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-082524 sudo crio config                                                                                                                                                                                                             │ cilium-082524                │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │                     │
	│ delete  │ -p cilium-082524                                                                                                                                                                                                                              │ cilium-082524                │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │ 23 Nov 25 08:51 UTC │
	│ start   │ -p force-systemd-env-498438 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-498438     │ jenkins │ v1.37.0 │ 23 Nov 25 08:51 UTC │ 23 Nov 25 08:52 UTC │
	│ delete  │ -p kubernetes-upgrade-354226                                                                                                                                                                                                                  │ kubernetes-upgrade-354226    │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ start   │ -p cert-expiration-322507 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-322507       │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ delete  │ -p force-systemd-env-498438                                                                                                                                                                                                                   │ force-systemd-env-498438     │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ start   │ -p cert-options-194318 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-194318          │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ ssh     │ cert-options-194318 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-194318          │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ ssh     │ -p cert-options-194318 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-194318          │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ delete  │ -p cert-options-194318                                                                                                                                                                                                                        │ cert-options-194318          │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ start   │ -p old-k8s-version-283312 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-283312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │                     │
	│ stop    │ -p old-k8s-version-283312 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-283312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:54 UTC │
	│ start   │ -p old-k8s-version-283312 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:55 UTC │
	│ image   │ old-k8s-version-283312 image list --format=json                                                                                                                                                                                               │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ pause   │ -p old-k8s-version-283312 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ delete  │ -p old-k8s-version-283312                                                                                                                                                                                                                     │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ delete  │ -p old-k8s-version-283312                                                                                                                                                                                                                     │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ start   │ -p default-k8s-diff-port-262764 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p cert-expiration-322507 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-322507       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p cert-expiration-322507                                                                                                                                                                                                                     │ cert-expiration-322507       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-262764 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:56:13
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:56:13.162095 1230335 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:56:13.162651 1230335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:56:13.162686 1230335 out.go:374] Setting ErrFile to fd 2...
	I1123 08:56:13.162705 1230335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:56:13.163024 1230335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:56:13.163491 1230335 out.go:368] Setting JSON to false
	I1123 08:56:13.164541 1230335 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34719,"bootTime":1763853455,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 08:56:13.164647 1230335 start.go:143] virtualization:  
	I1123 08:56:13.169802 1230335 out.go:179] * [embed-certs-879861] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:56:13.172432 1230335 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:56:13.172512 1230335 notify.go:221] Checking for updates...
	I1123 08:56:13.176432 1230335 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:56:13.179716 1230335 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:56:13.183357 1230335 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 08:56:13.186701 1230335 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:56:13.189732 1230335 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:56:13.193145 1230335 config.go:182] Loaded profile config "default-k8s-diff-port-262764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:56:13.193246 1230335 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:56:13.241195 1230335 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:56:13.241303 1230335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:56:13.384248 1230335 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:56:13.372718542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:56:13.384349 1230335 docker.go:319] overlay module found
	I1123 08:56:13.387494 1230335 out.go:179] * Using the docker driver based on user configuration
	I1123 08:56:13.390318 1230335 start.go:309] selected driver: docker
	I1123 08:56:13.390342 1230335 start.go:927] validating driver "docker" against <nil>
	I1123 08:56:13.390354 1230335 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:56:13.391019 1230335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:56:13.532170 1230335 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:56:13.522422496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:56:13.532357 1230335 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:56:13.532586 1230335 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:56:13.535923 1230335 out.go:179] * Using Docker driver with root privileges
	I1123 08:56:13.538797 1230335 cni.go:84] Creating CNI manager for ""
	I1123 08:56:13.538869 1230335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:56:13.538884 1230335 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:56:13.538972 1230335 start.go:353] cluster config:
	{Name:embed-certs-879861 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879861 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:56:13.542223 1230335 out.go:179] * Starting "embed-certs-879861" primary control-plane node in "embed-certs-879861" cluster
	I1123 08:56:13.545179 1230335 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:56:13.547983 1230335 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:56:13.550882 1230335 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:56:13.550947 1230335 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 08:56:13.550960 1230335 cache.go:65] Caching tarball of preloaded images
	I1123 08:56:13.551047 1230335 preload.go:238] Found /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 08:56:13.551061 1230335 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
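The preload check above only confirms that the tarball already sits in the local cache, so no download happens. A quick manual spot-check, assuming the same MINIKUBE_HOME layout as this run, would be:

    # path taken from the log lines above; illustrative only
    ls -lh /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/
    # preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 should be present and non-empty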
	I1123 08:56:13.551173 1230335 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/config.json ...
	I1123 08:56:13.551207 1230335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/config.json: {Name:mkd2803d9d8d25eb03198a9271fd928a357ee2d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:56:13.551367 1230335 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:56:13.590155 1230335 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:56:13.590180 1230335 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:56:13.590196 1230335 cache.go:243] Successfully downloaded all kic artifacts
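The kicbase image is likewise resolved from the local Docker daemon, so the pull is skipped. A hedged way to reproduce that check by hand (tag and digest copied from the log) is:

    docker image inspect gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948 \
      --format '{{index .RepoDigests 0}}'
    # the printed digest should normally match sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f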
	I1123 08:56:13.590229 1230335 start.go:360] acquireMachinesLock for embed-certs-879861: {Name:mkc426f5135ca68e4cb995276c3947d42bb1e43d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:56:13.590350 1230335 start.go:364] duration metric: took 92.018µs to acquireMachinesLock for "embed-certs-879861"
	I1123 08:56:13.590382 1230335 start.go:93] Provisioning new machine with config: &{Name:embed-certs-879861 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879861 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:56:13.590465 1230335 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:56:13.028621 1226777 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:56:13.028646 1226777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:56:13.028711 1226777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-262764
	I1123 08:56:13.043804 1226777 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:56:13.043830 1226777 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:56:13.043897 1226777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-262764
	I1123 08:56:13.087129 1226777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34522 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/default-k8s-diff-port-262764/id_rsa Username:docker}
	I1123 08:56:13.100041 1226777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34522 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/default-k8s-diff-port-262764/id_rsa Username:docker}
	I1123 08:56:13.460720 1226777 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:56:13.461229 1226777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:56:13.461336 1226777 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:56:13.483937 1226777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:56:13.571256 1226777 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-262764" to be "Ready" ...
	I1123 08:56:14.626934 1226777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.165661815s)
	I1123 08:56:14.626979 1226777 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.16562806s)
	I1123 08:56:14.626989 1226777 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 08:56:14.628099 1226777 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.144132673s)
	I1123 08:56:14.652643 1226777 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 08:56:14.655467 1226777 addons.go:530] duration metric: took 1.728013566s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:56:15.130733 1226777 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-262764" context rescaled to 1 replicas
	W1123 08:56:15.575027 1226777 node_ready.go:57] node "default-k8s-diff-port-262764" has "Ready":"False" status (will retry)
	I1123 08:56:13.597926 1230335 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:56:13.598166 1230335 start.go:159] libmachine.API.Create for "embed-certs-879861" (driver="docker")
	I1123 08:56:13.598202 1230335 client.go:173] LocalClient.Create starting
	I1123 08:56:13.598291 1230335 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem
	I1123 08:56:13.598331 1230335 main.go:143] libmachine: Decoding PEM data...
	I1123 08:56:13.598354 1230335 main.go:143] libmachine: Parsing certificate...
	I1123 08:56:13.598415 1230335 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem
	I1123 08:56:13.598437 1230335 main.go:143] libmachine: Decoding PEM data...
	I1123 08:56:13.598453 1230335 main.go:143] libmachine: Parsing certificate...
	I1123 08:56:13.598816 1230335 cli_runner.go:164] Run: docker network inspect embed-certs-879861 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:56:13.622233 1230335 cli_runner.go:211] docker network inspect embed-certs-879861 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:56:13.622309 1230335 network_create.go:284] running [docker network inspect embed-certs-879861] to gather additional debugging logs...
	I1123 08:56:13.622332 1230335 cli_runner.go:164] Run: docker network inspect embed-certs-879861
	W1123 08:56:13.649126 1230335 cli_runner.go:211] docker network inspect embed-certs-879861 returned with exit code 1
	I1123 08:56:13.649163 1230335 network_create.go:287] error running [docker network inspect embed-certs-879861]: docker network inspect embed-certs-879861: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-879861 not found
	I1123 08:56:13.649177 1230335 network_create.go:289] output of [docker network inspect embed-certs-879861]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-879861 not found
	
	** /stderr **
	I1123 08:56:13.649264 1230335 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:56:13.676237 1230335 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-32d396d9f7df IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:9b:29:4a:5c:ab} reservation:<nil>}
	I1123 08:56:13.676583 1230335 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-859c97accd92 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:ea:cf:62:f4:f8} reservation:<nil>}
	I1123 08:56:13.676881 1230335 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-50e966d7b39a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:1d:b6:b9:b9:ef} reservation:<nil>}
	I1123 08:56:13.677308 1230335 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1e450}
	I1123 08:56:13.677332 1230335 network_create.go:124] attempt to create docker network embed-certs-879861 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 08:56:13.677388 1230335 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-879861 embed-certs-879861
	I1123 08:56:13.774368 1230335 network_create.go:108] docker network embed-certs-879861 192.168.76.0/24 created
	I1123 08:56:13.774404 1230335 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-879861" container
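The subnet scan above skipped 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 (already taken) before settling on 192.168.76.0/24. A rough way to confirm the resulting network from the same host would be:

    docker network inspect embed-certs-879861 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # based on the log above this should print: 192.168.76.0/24 192.168.76.1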
	I1123 08:56:13.774493 1230335 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:56:13.800162 1230335 cli_runner.go:164] Run: docker volume create embed-certs-879861 --label name.minikube.sigs.k8s.io=embed-certs-879861 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:56:13.823169 1230335 oci.go:103] Successfully created a docker volume embed-certs-879861
	I1123 08:56:13.823275 1230335 cli_runner.go:164] Run: docker run --rm --name embed-certs-879861-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-879861 --entrypoint /usr/bin/test -v embed-certs-879861:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:56:14.513278 1230335 oci.go:107] Successfully prepared a docker volume embed-certs-879861
	I1123 08:56:14.513358 1230335 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:56:14.513369 1230335 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:56:14.513445 1230335 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-879861:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1123 08:56:18.074390 1226777 node_ready.go:57] node "default-k8s-diff-port-262764" has "Ready":"False" status (will retry)
	W1123 08:56:20.074627 1226777 node_ready.go:57] node "default-k8s-diff-port-262764" has "Ready":"False" status (will retry)
	I1123 08:56:18.944934 1230335 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-879861:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.431432501s)
	I1123 08:56:18.944967 1230335 kic.go:203] duration metric: took 4.431595212s to extract preloaded images to volume ...
	W1123 08:56:18.945103 1230335 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:56:18.945207 1230335 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:56:19.009843 1230335 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-879861 --name embed-certs-879861 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-879861 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-879861 --network embed-certs-879861 --ip 192.168.76.2 --volume embed-certs-879861:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:56:19.304238 1230335 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Running}}
	I1123 08:56:19.328642 1230335 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Status}}
	I1123 08:56:19.350461 1230335 cli_runner.go:164] Run: docker exec embed-certs-879861 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:56:19.400595 1230335 oci.go:144] the created container "embed-certs-879861" has a running status.
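At this point the node container is running, and its SSH endpoint is simply a port published on 127.0.0.1 (this run lands on 34527, as the later SSH lines show). A minimal sketch for locating it by hand:

    docker container inspect embed-certs-879861 --format '{{.State.Status}}'   # expect "running"
    docker port embed-certs-879861 22/tcp                                      # prints 127.0.0.1:<host port>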
	I1123 08:56:19.400622 1230335 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa...
	I1123 08:56:19.660039 1230335 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:56:19.687502 1230335 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Status}}
	I1123 08:56:19.708637 1230335 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:56:19.708656 1230335 kic_runner.go:114] Args: [docker exec --privileged embed-certs-879861 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:56:19.758856 1230335 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Status}}
	I1123 08:56:19.777885 1230335 machine.go:94] provisionDockerMachine start ...
	I1123 08:56:19.777980 1230335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:56:19.806490 1230335 main.go:143] libmachine: Using SSH client type: native
	I1123 08:56:19.806816 1230335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34527 <nil> <nil>}
	I1123 08:56:19.806826 1230335 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:56:19.808715 1230335 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:56:22.958827 1230335 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-879861
	
	I1123 08:56:22.958849 1230335 ubuntu.go:182] provisioning hostname "embed-certs-879861"
	I1123 08:56:22.958912 1230335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:56:22.976180 1230335 main.go:143] libmachine: Using SSH client type: native
	I1123 08:56:22.976515 1230335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34527 <nil> <nil>}
	I1123 08:56:22.976533 1230335 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-879861 && echo "embed-certs-879861" | sudo tee /etc/hostname
	I1123 08:56:23.140915 1230335 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-879861
	
	I1123 08:56:23.141035 1230335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:56:23.159784 1230335 main.go:143] libmachine: Using SSH client type: native
	I1123 08:56:23.160161 1230335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34527 <nil> <nil>}
	I1123 08:56:23.160183 1230335 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-879861' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-879861/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-879861' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:56:23.319357 1230335 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:56:23.319382 1230335 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 08:56:23.319441 1230335 ubuntu.go:190] setting up certificates
	I1123 08:56:23.319450 1230335 provision.go:84] configureAuth start
	I1123 08:56:23.319509 1230335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-879861
	I1123 08:56:23.336240 1230335 provision.go:143] copyHostCerts
	I1123 08:56:23.336311 1230335 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem, removing ...
	I1123 08:56:23.336324 1230335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem
	I1123 08:56:23.336406 1230335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 08:56:23.336507 1230335 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem, removing ...
	I1123 08:56:23.336517 1230335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem
	I1123 08:56:23.336545 1230335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 08:56:23.336601 1230335 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem, removing ...
	I1123 08:56:23.336610 1230335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem
	I1123 08:56:23.336633 1230335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
	I1123 08:56:23.336717 1230335 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.embed-certs-879861 san=[127.0.0.1 192.168.76.2 embed-certs-879861 localhost minikube]
	I1123 08:56:23.553550 1230335 provision.go:177] copyRemoteCerts
	I1123 08:56:23.553625 1230335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:56:23.553664 1230335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:56:23.571443 1230335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34527 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:56:23.679498 1230335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:56:23.701032 1230335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 08:56:23.719895 1230335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 08:56:23.737826 1230335 provision.go:87] duration metric: took 418.353228ms to configureAuth
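configureAuth generated a server certificate whose SANs cover the static IP and hostnames listed above (127.0.0.1, 192.168.76.2, embed-certs-879861, localhost, minikube). A minimal verification sketch on the host, assuming openssl is available and using the machine path from this run:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'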
	I1123 08:56:23.737854 1230335 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:56:23.738034 1230335 config.go:182] Loaded profile config "embed-certs-879861": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:56:23.738152 1230335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:56:23.754818 1230335 main.go:143] libmachine: Using SSH client type: native
	I1123 08:56:23.755137 1230335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34527 <nil> <nil>}
	I1123 08:56:23.755157 1230335 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:56:24.058694 1230335 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:56:24.058731 1230335 machine.go:97] duration metric: took 4.280813554s to provisionDockerMachine
	I1123 08:56:24.058743 1230335 client.go:176] duration metric: took 10.460529242s to LocalClient.Create
	I1123 08:56:24.058756 1230335 start.go:167] duration metric: took 10.460592026s to libmachine.API.Create "embed-certs-879861"
	I1123 08:56:24.058765 1230335 start.go:293] postStartSetup for "embed-certs-879861" (driver="docker")
	I1123 08:56:24.058775 1230335 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:56:24.058844 1230335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:56:24.058895 1230335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:56:24.079259 1230335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34527 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:56:24.183088 1230335 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:56:24.186258 1230335 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:56:24.186290 1230335 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:56:24.186302 1230335 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/addons for local assets ...
	I1123 08:56:24.186354 1230335 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/files for local assets ...
	I1123 08:56:24.186437 1230335 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem -> 10431592.pem in /etc/ssl/certs
	I1123 08:56:24.186545 1230335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:56:24.193685 1230335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:56:24.211323 1230335 start.go:296] duration metric: took 152.542755ms for postStartSetup
	I1123 08:56:24.211692 1230335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-879861
	I1123 08:56:24.227848 1230335 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/config.json ...
	I1123 08:56:24.228154 1230335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:56:24.228206 1230335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:56:24.245546 1230335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34527 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:56:24.347931 1230335 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:56:24.352322 1230335 start.go:128] duration metric: took 10.761842795s to createHost
	I1123 08:56:24.352345 1230335 start.go:83] releasing machines lock for "embed-certs-879861", held for 10.761980826s
	I1123 08:56:24.352412 1230335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-879861
	I1123 08:56:24.369960 1230335 ssh_runner.go:195] Run: cat /version.json
	I1123 08:56:24.370015 1230335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:56:24.370274 1230335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:56:24.370327 1230335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:56:24.398496 1230335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34527 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:56:24.400440 1230335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34527 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:56:24.586056 1230335 ssh_runner.go:195] Run: systemctl --version
	I1123 08:56:24.592431 1230335 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:56:24.628881 1230335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:56:24.633338 1230335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:56:24.633413 1230335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:56:24.661639 1230335 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:56:24.661673 1230335 start.go:496] detecting cgroup driver to use...
	I1123 08:56:24.661727 1230335 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:56:24.661793 1230335 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:56:24.678967 1230335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:56:24.694100 1230335 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:56:24.694173 1230335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:56:24.711727 1230335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:56:24.729987 1230335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:56:24.857842 1230335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:56:24.985707 1230335 docker.go:234] disabling docker service ...
	I1123 08:56:24.985821 1230335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:56:25.013278 1230335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:56:25.028265 1230335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:56:25.153844 1230335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:56:25.276653 1230335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:56:25.289557 1230335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:56:25.302985 1230335 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:56:25.303105 1230335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:56:25.311862 1230335 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 08:56:25.312001 1230335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:56:25.320620 1230335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:56:25.329006 1230335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:56:25.338152 1230335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:56:25.346347 1230335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:56:25.354930 1230335 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:56:25.367885 1230335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:56:25.376566 1230335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:56:25.384239 1230335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:56:25.391351 1230335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:56:25.509597 1230335 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 08:56:25.685691 1230335 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:56:25.685820 1230335 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:56:25.692572 1230335 start.go:564] Will wait 60s for crictl version
	I1123 08:56:25.692677 1230335 ssh_runner.go:195] Run: which crictl
	I1123 08:56:25.698234 1230335 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:56:25.727040 1230335 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:56:25.727157 1230335 ssh_runner.go:195] Run: crio --version
	I1123 08:56:25.756249 1230335 ssh_runner.go:195] Run: crio --version
	I1123 08:56:25.791072 1230335 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
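The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image registry.k8s.io/pause:3.10.1, cgroupfs cgroup manager, conmon cgroup, unprivileged port sysctl) and then restart CRI-O. A hedged spot-check from inside the node, for example via minikube -p embed-certs-879861 ssh:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    sudo crictl version   # should report RuntimeName cri-o, RuntimeVersion 1.34.2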
	W1123 08:56:22.574447 1226777 node_ready.go:57] node "default-k8s-diff-port-262764" has "Ready":"False" status (will retry)
	W1123 08:56:25.074662 1226777 node_ready.go:57] node "default-k8s-diff-port-262764" has "Ready":"False" status (will retry)
	I1123 08:56:25.794105 1230335 cli_runner.go:164] Run: docker network inspect embed-certs-879861 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:56:25.812239 1230335 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 08:56:25.815974 1230335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:56:25.825597 1230335 kubeadm.go:884] updating cluster {Name:embed-certs-879861 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879861 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:56:25.825708 1230335 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:56:25.825760 1230335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:56:25.867110 1230335 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:56:25.867138 1230335 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:56:25.867220 1230335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:56:25.892744 1230335 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:56:25.892766 1230335 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:56:25.892775 1230335 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 08:56:25.892858 1230335 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-879861 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879861 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:56:25.892946 1230335 ssh_runner.go:195] Run: crio config
	I1123 08:56:25.957620 1230335 cni.go:84] Creating CNI manager for ""
	I1123 08:56:25.957686 1230335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:56:25.957719 1230335 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:56:25.957776 1230335 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-879861 NodeName:embed-certs-879861 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:56:25.957937 1230335 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-879861"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:56:25.958047 1230335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:56:25.965631 1230335 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:56:25.965706 1230335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:56:25.973230 1230335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1123 08:56:25.986336 1230335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:56:25.998810 1230335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
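The generated kubeadm config shown above has just been written to /var/tmp/minikube/kubeadm.yaml.new (2215 bytes). Purely as an illustration (this test run does not do this), the file could be sanity-checked inside the node with the kubeadm binary the log already located:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run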
	I1123 08:56:26.014539 1230335 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:56:26.019696 1230335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:56:26.030752 1230335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:56:26.164131 1230335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:56:26.180875 1230335 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861 for IP: 192.168.76.2
	I1123 08:56:26.180947 1230335 certs.go:195] generating shared ca certs ...
	I1123 08:56:26.180976 1230335 certs.go:227] acquiring lock for ca certs: {Name:mk8b2dd1177c57b74f955f055073d275001ee616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:56:26.181174 1230335 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key
	I1123 08:56:26.181247 1230335 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key
	I1123 08:56:26.181270 1230335 certs.go:257] generating profile certs ...
	I1123 08:56:26.181382 1230335 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/client.key
	I1123 08:56:26.181415 1230335 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/client.crt with IP's: []
	I1123 08:56:26.344414 1230335 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/client.crt ...
	I1123 08:56:26.344446 1230335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/client.crt: {Name:mk3a5c3d02f99f75ba07c883d331efd889fb0189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:56:26.344683 1230335 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/client.key ...
	I1123 08:56:26.344700 1230335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/client.key: {Name:mkb038ed33441e7f6234a4b40bce0a99a6aeb90d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:56:26.344796 1230335 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/apiserver.key.a22c785f
	I1123 08:56:26.344822 1230335 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/apiserver.crt.a22c785f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 08:56:26.683785 1230335 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/apiserver.crt.a22c785f ...
	I1123 08:56:26.683822 1230335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/apiserver.crt.a22c785f: {Name:mk0bf00246f84f44a67e54c8db5db606b4318d0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:56:26.684013 1230335 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/apiserver.key.a22c785f ...
	I1123 08:56:26.684029 1230335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/apiserver.key.a22c785f: {Name:mk5801417c831329fce8a612a9b186805ad41c2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:56:26.684122 1230335 certs.go:382] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/apiserver.crt.a22c785f -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/apiserver.crt
	I1123 08:56:26.684202 1230335 certs.go:386] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/apiserver.key.a22c785f -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/apiserver.key
	I1123 08:56:26.684262 1230335 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/proxy-client.key
	I1123 08:56:26.684280 1230335 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/proxy-client.crt with IP's: []
	I1123 08:56:26.975260 1230335 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/proxy-client.crt ...
	I1123 08:56:26.975291 1230335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/proxy-client.crt: {Name:mk4323a3d1e80717693c33f2cb59794fee7c1a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:56:26.975475 1230335 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/proxy-client.key ...
	I1123 08:56:26.975492 1230335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/proxy-client.key: {Name:mkf33544b57d5b8b120eda5e0891e3a4968c63cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:56:26.975683 1230335 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem (1338 bytes)
	W1123 08:56:26.975728 1230335 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159_empty.pem, impossibly tiny 0 bytes
	I1123 08:56:26.975737 1230335 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:56:26.975769 1230335 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:56:26.975800 1230335 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:56:26.975830 1230335 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem (1675 bytes)
	I1123 08:56:26.975885 1230335 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:56:26.976501 1230335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:56:26.998394 1230335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:56:27.018743 1230335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:56:27.037470 1230335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:56:27.055311 1230335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 08:56:27.074158 1230335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 08:56:27.094217 1230335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:56:27.111970 1230335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:56:27.130080 1230335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:56:27.148202 1230335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem --> /usr/share/ca-certificates/1043159.pem (1338 bytes)
	I1123 08:56:27.165577 1230335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /usr/share/ca-certificates/10431592.pem (1708 bytes)
	I1123 08:56:27.183132 1230335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:56:27.200313 1230335 ssh_runner.go:195] Run: openssl version
	I1123 08:56:27.206558 1230335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:56:27.214925 1230335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:56:27.219007 1230335 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:56:27.219077 1230335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:56:27.260223 1230335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:56:27.268546 1230335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1043159.pem && ln -fs /usr/share/ca-certificates/1043159.pem /etc/ssl/certs/1043159.pem"
	I1123 08:56:27.276879 1230335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1043159.pem
	I1123 08:56:27.280639 1230335 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:03 /usr/share/ca-certificates/1043159.pem
	I1123 08:56:27.280709 1230335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1043159.pem
	I1123 08:56:27.322822 1230335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1043159.pem /etc/ssl/certs/51391683.0"
	I1123 08:56:27.331467 1230335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10431592.pem && ln -fs /usr/share/ca-certificates/10431592.pem /etc/ssl/certs/10431592.pem"
	I1123 08:56:27.339894 1230335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10431592.pem
	I1123 08:56:27.343987 1230335 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:03 /usr/share/ca-certificates/10431592.pem
	I1123 08:56:27.344061 1230335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10431592.pem
	I1123 08:56:27.388842 1230335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10431592.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:56:27.397505 1230335 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:56:27.401312 1230335 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:56:27.401372 1230335 kubeadm.go:401] StartCluster: {Name:embed-certs-879861 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879861 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:56:27.401448 1230335 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:56:27.401506 1230335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:56:27.435296 1230335 cri.go:89] found id: ""
	I1123 08:56:27.435364 1230335 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:56:27.444331 1230335 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:56:27.452034 1230335 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:56:27.452121 1230335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:56:27.459697 1230335 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:56:27.459715 1230335 kubeadm.go:158] found existing configuration files:
	
	I1123 08:56:27.459797 1230335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:56:27.467527 1230335 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:56:27.467639 1230335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:56:27.475009 1230335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:56:27.482777 1230335 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:56:27.482892 1230335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:56:27.490316 1230335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:56:27.497762 1230335 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:56:27.497853 1230335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:56:27.505091 1230335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:56:27.519899 1230335 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:56:27.519970 1230335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:56:27.527416 1230335 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:56:27.576513 1230335 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:56:27.576584 1230335 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:56:27.611696 1230335 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:56:27.611769 1230335 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:56:27.611812 1230335 kubeadm.go:319] OS: Linux
	I1123 08:56:27.611863 1230335 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:56:27.611931 1230335 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:56:27.611982 1230335 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:56:27.612036 1230335 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:56:27.612088 1230335 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:56:27.612140 1230335 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:56:27.612189 1230335 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:56:27.612242 1230335 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:56:27.612292 1230335 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:56:27.696831 1230335 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:56:27.696948 1230335 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:56:27.697047 1230335 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:56:27.706906 1230335 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:56:27.712584 1230335 out.go:252]   - Generating certificates and keys ...
	I1123 08:56:27.712710 1230335 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:56:27.712802 1230335 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	W1123 08:56:27.075263 1226777 node_ready.go:57] node "default-k8s-diff-port-262764" has "Ready":"False" status (will retry)
	W1123 08:56:29.574691 1226777 node_ready.go:57] node "default-k8s-diff-port-262764" has "Ready":"False" status (will retry)
	I1123 08:56:28.514039 1230335 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:56:28.795324 1230335 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:56:29.324170 1230335 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:56:29.972832 1230335 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:56:30.994260 1230335 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:56:30.994634 1230335 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-879861 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:56:31.429449 1230335 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:56:31.429922 1230335 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-879861 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:56:31.837061 1230335 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:56:32.403393 1230335 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:56:34.082486 1230335 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:56:34.082781 1230335 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:56:34.427373 1230335 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:56:34.822874 1230335 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:56:35.258590 1230335 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:56:35.887494 1230335 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:56:36.711089 1230335 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:56:36.711742 1230335 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:56:36.714291 1230335 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1123 08:56:32.075701 1226777 node_ready.go:57] node "default-k8s-diff-port-262764" has "Ready":"False" status (will retry)
	W1123 08:56:34.575483 1226777 node_ready.go:57] node "default-k8s-diff-port-262764" has "Ready":"False" status (will retry)
	I1123 08:56:36.717700 1230335 out.go:252]   - Booting up control plane ...
	I1123 08:56:36.717806 1230335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:56:36.717884 1230335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:56:36.717951 1230335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:56:36.733864 1230335 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:56:36.733980 1230335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:56:36.742553 1230335 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:56:36.742892 1230335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:56:36.742947 1230335 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:56:36.874133 1230335 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:56:36.874275 1230335 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1123 08:56:37.074808 1226777 node_ready.go:57] node "default-k8s-diff-port-262764" has "Ready":"False" status (will retry)
	W1123 08:56:39.075827 1226777 node_ready.go:57] node "default-k8s-diff-port-262764" has "Ready":"False" status (will retry)
	W1123 08:56:41.574238 1226777 node_ready.go:57] node "default-k8s-diff-port-262764" has "Ready":"False" status (will retry)
	I1123 08:56:38.379564 1230335 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501774411s
	I1123 08:56:38.381900 1230335 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:56:38.382002 1230335 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 08:56:38.382106 1230335 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:56:38.382191 1230335 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:56:42.498192 1230335 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.115533858s
	I1123 08:56:44.360752 1230335 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.978801997s
	I1123 08:56:44.884146 1230335 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501936963s
	I1123 08:56:44.904787 1230335 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:56:44.920272 1230335 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:56:44.935792 1230335 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:56:44.936012 1230335 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-879861 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:56:44.948038 1230335 kubeadm.go:319] [bootstrap-token] Using token: nffanf.2w1r1hpxz1odwbup
	I1123 08:56:44.951304 1230335 out.go:252]   - Configuring RBAC rules ...
	I1123 08:56:44.951426 1230335 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:56:44.955318 1230335 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:56:44.963563 1230335 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:56:44.969031 1230335 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:56:44.975485 1230335 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:56:44.979450 1230335 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:56:45.297754 1230335 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:56:45.776591 1230335 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:56:46.291287 1230335 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:56:46.292530 1230335 kubeadm.go:319] 
	I1123 08:56:46.292601 1230335 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:56:46.292606 1230335 kubeadm.go:319] 
	I1123 08:56:46.292683 1230335 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:56:46.292688 1230335 kubeadm.go:319] 
	I1123 08:56:46.292712 1230335 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:56:46.292894 1230335 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:56:46.292948 1230335 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:56:46.292953 1230335 kubeadm.go:319] 
	I1123 08:56:46.293003 1230335 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:56:46.293007 1230335 kubeadm.go:319] 
	I1123 08:56:46.293051 1230335 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:56:46.293055 1230335 kubeadm.go:319] 
	I1123 08:56:46.293108 1230335 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:56:46.293182 1230335 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:56:46.293245 1230335 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:56:46.293249 1230335 kubeadm.go:319] 
	I1123 08:56:46.293328 1230335 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:56:46.293400 1230335 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:56:46.293404 1230335 kubeadm.go:319] 
	I1123 08:56:46.293483 1230335 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token nffanf.2w1r1hpxz1odwbup \
	I1123 08:56:46.293591 1230335 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb \
	I1123 08:56:46.293611 1230335 kubeadm.go:319] 	--control-plane 
	I1123 08:56:46.293615 1230335 kubeadm.go:319] 
	I1123 08:56:46.293695 1230335 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:56:46.293698 1230335 kubeadm.go:319] 
	I1123 08:56:46.293775 1230335 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token nffanf.2w1r1hpxz1odwbup \
	I1123 08:56:46.293872 1230335 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb 
	I1123 08:56:46.298536 1230335 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 08:56:46.298762 1230335 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 08:56:46.298867 1230335 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 08:56:46.298883 1230335 cni.go:84] Creating CNI manager for ""
	I1123 08:56:46.298892 1230335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:56:46.304017 1230335 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1123 08:56:43.575485 1226777 node_ready.go:57] node "default-k8s-diff-port-262764" has "Ready":"False" status (will retry)
	W1123 08:56:46.074702 1226777 node_ready.go:57] node "default-k8s-diff-port-262764" has "Ready":"False" status (will retry)
	I1123 08:56:46.306965 1230335 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:56:46.311075 1230335 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:56:46.311094 1230335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:56:46.325951 1230335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:56:46.618936 1230335 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:56:46.619051 1230335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:56:46.619147 1230335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-879861 minikube.k8s.io/updated_at=2025_11_23T08_56_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=embed-certs-879861 minikube.k8s.io/primary=true
	I1123 08:56:46.758103 1230335 ops.go:34] apiserver oom_adj: -16
	I1123 08:56:46.758275 1230335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:56:47.258457 1230335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:56:47.758956 1230335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:56:48.258714 1230335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:56:48.758854 1230335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:56:49.258933 1230335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:56:49.759151 1230335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:56:50.258925 1230335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:56:50.758946 1230335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:56:51.259263 1230335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:56:51.377875 1230335 kubeadm.go:1114] duration metric: took 4.75887664s to wait for elevateKubeSystemPrivileges
	I1123 08:56:51.377906 1230335 kubeadm.go:403] duration metric: took 23.976539829s to StartCluster
	I1123 08:56:51.377922 1230335 settings.go:142] acquiring lock: {Name:mk23f3092f33e47ced9558cb4bac2b30c55547fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:56:51.377996 1230335 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:56:51.379368 1230335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:56:51.379576 1230335 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:56:51.379693 1230335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:56:51.379937 1230335 config.go:182] Loaded profile config "embed-certs-879861": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:56:51.379979 1230335 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:56:51.380038 1230335 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-879861"
	I1123 08:56:51.380053 1230335 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-879861"
	I1123 08:56:51.380077 1230335 host.go:66] Checking if "embed-certs-879861" exists ...
	I1123 08:56:51.380574 1230335 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Status}}
	I1123 08:56:51.381062 1230335 addons.go:70] Setting default-storageclass=true in profile "embed-certs-879861"
	I1123 08:56:51.381084 1230335 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-879861"
	I1123 08:56:51.381380 1230335 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Status}}
	I1123 08:56:51.383244 1230335 out.go:179] * Verifying Kubernetes components...
	I1123 08:56:51.387060 1230335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:56:51.416246 1230335 addons.go:239] Setting addon default-storageclass=true in "embed-certs-879861"
	I1123 08:56:51.416286 1230335 host.go:66] Checking if "embed-certs-879861" exists ...
	I1123 08:56:51.416706 1230335 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Status}}
	I1123 08:56:51.429752 1230335 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1123 08:56:48.574920 1226777 node_ready.go:57] node "default-k8s-diff-port-262764" has "Ready":"False" status (will retry)
	W1123 08:56:51.074531 1226777 node_ready.go:57] node "default-k8s-diff-port-262764" has "Ready":"False" status (will retry)
	I1123 08:56:51.432638 1230335 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:56:51.432659 1230335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:56:51.432718 1230335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:56:51.447046 1230335 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:56:51.447065 1230335 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:56:51.447129 1230335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:56:51.477783 1230335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34527 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:56:51.488990 1230335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34527 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:56:51.760090 1230335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:56:51.760284 1230335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:56:51.788820 1230335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:56:51.810385 1230335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:56:52.330305 1230335 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 08:56:52.333919 1230335 node_ready.go:35] waiting up to 6m0s for node "embed-certs-879861" to be "Ready" ...
	I1123 08:56:52.553873 1230335 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1123 08:56:52.556866 1230335 addons.go:530] duration metric: took 1.176876448s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 08:56:52.834935 1230335 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-879861" context rescaled to 1 replicas
	W1123 08:56:53.574705 1226777 node_ready.go:57] node "default-k8s-diff-port-262764" has "Ready":"False" status (will retry)
	I1123 08:56:54.080527 1226777 node_ready.go:49] node "default-k8s-diff-port-262764" is "Ready"
	I1123 08:56:54.080560 1226777 node_ready.go:38] duration metric: took 40.509268787s for node "default-k8s-diff-port-262764" to be "Ready" ...
	I1123 08:56:54.080574 1226777 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:56:54.080636 1226777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:56:54.094239 1226777 api_server.go:72] duration metric: took 41.167332986s to wait for apiserver process to appear ...
	I1123 08:56:54.094265 1226777 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:56:54.094287 1226777 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 08:56:54.102244 1226777 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 08:56:54.103368 1226777 api_server.go:141] control plane version: v1.34.1
	I1123 08:56:54.103397 1226777 api_server.go:131] duration metric: took 9.125498ms to wait for apiserver health ...
	I1123 08:56:54.103407 1226777 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:56:54.106958 1226777 system_pods.go:59] 8 kube-system pods found
	I1123 08:56:54.106998 1226777 system_pods.go:61] "coredns-66bc5c9577-mmrrf" [6e362045-d2e1-48cc-9e1d-c6b0dfa33477] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:56:54.107005 1226777 system_pods.go:61] "etcd-default-k8s-diff-port-262764" [4021e039-a3e2-4640-b525-61ca05c4f826] Running
	I1123 08:56:54.107011 1226777 system_pods.go:61] "kindnet-xsm2q" [ddc99fbd-3077-4564-9af7-f3d3cb84526a] Running
	I1123 08:56:54.107016 1226777 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-262764" [0ccc6c56-33cc-434f-b7af-28ea71874781] Running
	I1123 08:56:54.107021 1226777 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-262764" [32313b16-19a7-4780-b720-a4fbfede7d6c] Running
	I1123 08:56:54.107030 1226777 system_pods.go:61] "kube-proxy-9thkr" [2cfa1824-511c-4e8b-8bc1-551bda9a3767] Running
	I1123 08:56:54.107035 1226777 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-262764" [4ad18014-f007-4491-a54a-991aedaddbef] Running
	I1123 08:56:54.107044 1226777 system_pods.go:61] "storage-provisioner" [b14064f1-d4ac-44c3-8eff-4854e3c5615e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:56:54.107056 1226777 system_pods.go:74] duration metric: took 3.643984ms to wait for pod list to return data ...
	I1123 08:56:54.107065 1226777 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:56:54.113766 1226777 default_sa.go:45] found service account: "default"
	I1123 08:56:54.113790 1226777 default_sa.go:55] duration metric: took 6.716835ms for default service account to be created ...
	I1123 08:56:54.113800 1226777 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:56:54.116671 1226777 system_pods.go:86] 8 kube-system pods found
	I1123 08:56:54.116746 1226777 system_pods.go:89] "coredns-66bc5c9577-mmrrf" [6e362045-d2e1-48cc-9e1d-c6b0dfa33477] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:56:54.116769 1226777 system_pods.go:89] "etcd-default-k8s-diff-port-262764" [4021e039-a3e2-4640-b525-61ca05c4f826] Running
	I1123 08:56:54.116792 1226777 system_pods.go:89] "kindnet-xsm2q" [ddc99fbd-3077-4564-9af7-f3d3cb84526a] Running
	I1123 08:56:54.116830 1226777 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-262764" [0ccc6c56-33cc-434f-b7af-28ea71874781] Running
	I1123 08:56:54.116849 1226777 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-262764" [32313b16-19a7-4780-b720-a4fbfede7d6c] Running
	I1123 08:56:54.116869 1226777 system_pods.go:89] "kube-proxy-9thkr" [2cfa1824-511c-4e8b-8bc1-551bda9a3767] Running
	I1123 08:56:54.116888 1226777 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-262764" [4ad18014-f007-4491-a54a-991aedaddbef] Running
	I1123 08:56:54.116929 1226777 system_pods.go:89] "storage-provisioner" [b14064f1-d4ac-44c3-8eff-4854e3c5615e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:56:54.116977 1226777 retry.go:31] will retry after 305.282172ms: missing components: kube-dns
	I1123 08:56:54.428003 1226777 system_pods.go:86] 8 kube-system pods found
	I1123 08:56:54.428041 1226777 system_pods.go:89] "coredns-66bc5c9577-mmrrf" [6e362045-d2e1-48cc-9e1d-c6b0dfa33477] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:56:54.428048 1226777 system_pods.go:89] "etcd-default-k8s-diff-port-262764" [4021e039-a3e2-4640-b525-61ca05c4f826] Running
	I1123 08:56:54.428054 1226777 system_pods.go:89] "kindnet-xsm2q" [ddc99fbd-3077-4564-9af7-f3d3cb84526a] Running
	I1123 08:56:54.428059 1226777 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-262764" [0ccc6c56-33cc-434f-b7af-28ea71874781] Running
	I1123 08:56:54.428066 1226777 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-262764" [32313b16-19a7-4780-b720-a4fbfede7d6c] Running
	I1123 08:56:54.428071 1226777 system_pods.go:89] "kube-proxy-9thkr" [2cfa1824-511c-4e8b-8bc1-551bda9a3767] Running
	I1123 08:56:54.428075 1226777 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-262764" [4ad18014-f007-4491-a54a-991aedaddbef] Running
	I1123 08:56:54.428081 1226777 system_pods.go:89] "storage-provisioner" [b14064f1-d4ac-44c3-8eff-4854e3c5615e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:56:54.428096 1226777 retry.go:31] will retry after 252.021416ms: missing components: kube-dns
	I1123 08:56:54.684007 1226777 system_pods.go:86] 8 kube-system pods found
	I1123 08:56:54.684041 1226777 system_pods.go:89] "coredns-66bc5c9577-mmrrf" [6e362045-d2e1-48cc-9e1d-c6b0dfa33477] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:56:54.684049 1226777 system_pods.go:89] "etcd-default-k8s-diff-port-262764" [4021e039-a3e2-4640-b525-61ca05c4f826] Running
	I1123 08:56:54.684055 1226777 system_pods.go:89] "kindnet-xsm2q" [ddc99fbd-3077-4564-9af7-f3d3cb84526a] Running
	I1123 08:56:54.684060 1226777 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-262764" [0ccc6c56-33cc-434f-b7af-28ea71874781] Running
	I1123 08:56:54.684066 1226777 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-262764" [32313b16-19a7-4780-b720-a4fbfede7d6c] Running
	I1123 08:56:54.684070 1226777 system_pods.go:89] "kube-proxy-9thkr" [2cfa1824-511c-4e8b-8bc1-551bda9a3767] Running
	I1123 08:56:54.684075 1226777 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-262764" [4ad18014-f007-4491-a54a-991aedaddbef] Running
	I1123 08:56:54.684086 1226777 system_pods.go:89] "storage-provisioner" [b14064f1-d4ac-44c3-8eff-4854e3c5615e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:56:54.684100 1226777 retry.go:31] will retry after 393.020312ms: missing components: kube-dns
	I1123 08:56:55.080959 1226777 system_pods.go:86] 8 kube-system pods found
	I1123 08:56:55.080994 1226777 system_pods.go:89] "coredns-66bc5c9577-mmrrf" [6e362045-d2e1-48cc-9e1d-c6b0dfa33477] Running
	I1123 08:56:55.081003 1226777 system_pods.go:89] "etcd-default-k8s-diff-port-262764" [4021e039-a3e2-4640-b525-61ca05c4f826] Running
	I1123 08:56:55.081009 1226777 system_pods.go:89] "kindnet-xsm2q" [ddc99fbd-3077-4564-9af7-f3d3cb84526a] Running
	I1123 08:56:55.081014 1226777 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-262764" [0ccc6c56-33cc-434f-b7af-28ea71874781] Running
	I1123 08:56:55.081018 1226777 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-262764" [32313b16-19a7-4780-b720-a4fbfede7d6c] Running
	I1123 08:56:55.081022 1226777 system_pods.go:89] "kube-proxy-9thkr" [2cfa1824-511c-4e8b-8bc1-551bda9a3767] Running
	I1123 08:56:55.081027 1226777 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-262764" [4ad18014-f007-4491-a54a-991aedaddbef] Running
	I1123 08:56:55.081034 1226777 system_pods.go:89] "storage-provisioner" [b14064f1-d4ac-44c3-8eff-4854e3c5615e] Running
	I1123 08:56:55.081041 1226777 system_pods.go:126] duration metric: took 967.236126ms to wait for k8s-apps to be running ...
	I1123 08:56:55.081059 1226777 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:56:55.081119 1226777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:56:55.095914 1226777 system_svc.go:56] duration metric: took 14.838545ms WaitForService to wait for kubelet
	I1123 08:56:55.095942 1226777 kubeadm.go:587] duration metric: took 42.169041023s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:56:55.095962 1226777 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:56:55.099160 1226777 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:56:55.099228 1226777 node_conditions.go:123] node cpu capacity is 2
	I1123 08:56:55.099243 1226777 node_conditions.go:105] duration metric: took 3.275946ms to run NodePressure ...
	I1123 08:56:55.099256 1226777 start.go:242] waiting for startup goroutines ...
	I1123 08:56:55.099265 1226777 start.go:247] waiting for cluster config update ...
	I1123 08:56:55.099281 1226777 start.go:256] writing updated cluster config ...
	I1123 08:56:55.099580 1226777 ssh_runner.go:195] Run: rm -f paused
	I1123 08:56:55.103261 1226777 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:56:55.107702 1226777 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mmrrf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:56:55.112718 1226777 pod_ready.go:94] pod "coredns-66bc5c9577-mmrrf" is "Ready"
	I1123 08:56:55.112746 1226777 pod_ready.go:86] duration metric: took 5.014482ms for pod "coredns-66bc5c9577-mmrrf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:56:55.115290 1226777 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:56:55.120523 1226777 pod_ready.go:94] pod "etcd-default-k8s-diff-port-262764" is "Ready"
	I1123 08:56:55.120553 1226777 pod_ready.go:86] duration metric: took 5.237663ms for pod "etcd-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:56:55.123046 1226777 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:56:55.127388 1226777 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-262764" is "Ready"
	I1123 08:56:55.127414 1226777 pod_ready.go:86] duration metric: took 4.345389ms for pod "kube-apiserver-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:56:55.129811 1226777 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:56:55.508094 1226777 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-262764" is "Ready"
	I1123 08:56:55.508171 1226777 pod_ready.go:86] duration metric: took 378.334788ms for pod "kube-controller-manager-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:56:55.708566 1226777 pod_ready.go:83] waiting for pod "kube-proxy-9thkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:56:56.109917 1226777 pod_ready.go:94] pod "kube-proxy-9thkr" is "Ready"
	I1123 08:56:56.109970 1226777 pod_ready.go:86] duration metric: took 401.361355ms for pod "kube-proxy-9thkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:56:56.308475 1226777 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:56:56.708761 1226777 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-262764" is "Ready"
	I1123 08:56:56.708791 1226777 pod_ready.go:86] duration metric: took 400.288679ms for pod "kube-scheduler-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:56:56.708805 1226777 pod_ready.go:40] duration metric: took 1.605458614s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:56:56.759535 1226777 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 08:56:56.764856 1226777 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-262764" cluster and "default" namespace by default
	W1123 08:56:54.336652 1230335 node_ready.go:57] node "embed-certs-879861" has "Ready":"False" status (will retry)
	W1123 08:56:56.837089 1230335 node_ready.go:57] node "embed-certs-879861" has "Ready":"False" status (will retry)
	W1123 08:56:58.837539 1230335 node_ready.go:57] node "embed-certs-879861" has "Ready":"False" status (will retry)
	W1123 08:57:01.337407 1230335 node_ready.go:57] node "embed-certs-879861" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 23 08:56:54 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:54.458164282Z" level=info msg="Created container 267fdcee06a72fbcd6eaaf9452239a3a9ff77469e48fe5ce604246cbda2cc221: kube-system/coredns-66bc5c9577-mmrrf/coredns" id=82bccc63-6bb4-45b4-bb83-354d2ec75b97 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:56:54 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:54.459329403Z" level=info msg="Starting container: 267fdcee06a72fbcd6eaaf9452239a3a9ff77469e48fe5ce604246cbda2cc221" id=1281de61-82e7-4e13-9490-5980ba7f55a0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:56:54 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:54.463722872Z" level=info msg="Started container" PID=1751 containerID=267fdcee06a72fbcd6eaaf9452239a3a9ff77469e48fe5ce604246cbda2cc221 description=kube-system/coredns-66bc5c9577-mmrrf/coredns id=1281de61-82e7-4e13-9490-5980ba7f55a0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3bfadca457a77d64c2130ff2743013285d9382896ca5731924449da4d3e7a89d
	Nov 23 08:56:57 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:57.30823203Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4877e634-0af7-4ce8-aea4-89ef927da23c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:56:57 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:57.308689412Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:56:57 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:57.320483094Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:45871d24efde0634888b34e00c328aa69f1f8e177247d4ad7df811c71b00a7c6 UID:5e87a35a-9a78-4158-8a26-e6618c72aa86 NetNS:/var/run/netns/da83cee6-46f8-497e-accf-4ed29d6ec65a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079358}] Aliases:map[]}"
	Nov 23 08:56:57 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:57.32053988Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 08:56:57 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:57.330370089Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:45871d24efde0634888b34e00c328aa69f1f8e177247d4ad7df811c71b00a7c6 UID:5e87a35a-9a78-4158-8a26-e6618c72aa86 NetNS:/var/run/netns/da83cee6-46f8-497e-accf-4ed29d6ec65a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079358}] Aliases:map[]}"
	Nov 23 08:56:57 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:57.330556775Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 08:56:57 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:57.333271161Z" level=info msg="Ran pod sandbox 45871d24efde0634888b34e00c328aa69f1f8e177247d4ad7df811c71b00a7c6 with infra container: default/busybox/POD" id=4877e634-0af7-4ce8-aea4-89ef927da23c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:56:57 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:57.33782927Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1f0cdc33-4e19-4439-a691-79731f1a9b28 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:56:57 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:57.337956405Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1f0cdc33-4e19-4439-a691-79731f1a9b28 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:56:57 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:57.338005905Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=1f0cdc33-4e19-4439-a691-79731f1a9b28 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:56:57 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:57.339066093Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=198f71b5-6e11-4f32-a5af-a5ec29d3846d name=/runtime.v1.ImageService/PullImage
	Nov 23 08:56:57 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:57.342065222Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:56:59 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:59.437325852Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=198f71b5-6e11-4f32-a5af-a5ec29d3846d name=/runtime.v1.ImageService/PullImage
	Nov 23 08:56:59 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:59.43818068Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a3751086-0b21-43e2-9891-3965da296b68 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:56:59 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:59.439430023Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=37141e4b-8f2c-4eb7-b932-39926be0e88d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:56:59 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:59.446921171Z" level=info msg="Creating container: default/busybox/busybox" id=d48439c5-92d2-4406-9d56-c1c35cdf1ec6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:56:59 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:59.447049175Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:56:59 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:59.452464655Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:56:59 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:59.452908631Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:56:59 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:59.46900822Z" level=info msg="Created container 9d141d697e815df459923e4dfaaba5b7a014305eaea2e47aca340d0a9a383c40: default/busybox/busybox" id=d48439c5-92d2-4406-9d56-c1c35cdf1ec6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:56:59 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:59.470117399Z" level=info msg="Starting container: 9d141d697e815df459923e4dfaaba5b7a014305eaea2e47aca340d0a9a383c40" id=e2f0cd4f-713c-4b7c-8702-06fcab6b978e name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:56:59 default-k8s-diff-port-262764 crio[837]: time="2025-11-23T08:56:59.471882995Z" level=info msg="Started container" PID=1806 containerID=9d141d697e815df459923e4dfaaba5b7a014305eaea2e47aca340d0a9a383c40 description=default/busybox/busybox id=e2f0cd4f-713c-4b7c-8702-06fcab6b978e name=/runtime.v1.RuntimeService/StartContainer sandboxID=45871d24efde0634888b34e00c328aa69f1f8e177247d4ad7df811c71b00a7c6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	9d141d697e815       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   45871d24efde0       busybox                                                default
	267fdcee06a72       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   3bfadca457a77       coredns-66bc5c9577-mmrrf                               kube-system
	48aca4d8d4224       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   9118aa209556c       storage-provisioner                                    kube-system
	3ba1e7e411d20       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   6b68995618a5d       kindnet-xsm2q                                          kube-system
	4025ed831b787       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   a3fbd0ea0dc57       kube-proxy-9thkr                                       kube-system
	2c4efca137724       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   9719f36f233e6       kube-controller-manager-default-k8s-diff-port-262764   kube-system
	583f13d69ece9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   94f19d5b96f9f       etcd-default-k8s-diff-port-262764                      kube-system
	a9179fa113431       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   c1fa9bd7903b9       kube-scheduler-default-k8s-diff-port-262764            kube-system
	f7134927f3db1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   a0b3cfee38073       kube-apiserver-default-k8s-diff-port-262764            kube-system
	
	
	==> coredns [267fdcee06a72fbcd6eaaf9452239a3a9ff77469e48fe5ce604246cbda2cc221] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51286 - 23365 "HINFO IN 8369061322078648479.1113377613083112606. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012659193s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-262764
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-262764
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=default-k8s-diff-port-262764
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_56_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:56:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-262764
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:56:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:56:53 +0000   Sun, 23 Nov 2025 08:55:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:56:53 +0000   Sun, 23 Nov 2025 08:55:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:56:53 +0000   Sun, 23 Nov 2025 08:55:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:56:53 +0000   Sun, 23 Nov 2025 08:56:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-262764
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                9167756b-ee2d-4d27-ae18-a988612654cb
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-mmrrf                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-default-k8s-diff-port-262764                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-xsm2q                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-262764             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-262764    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-9thkr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-262764             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Warning  CgroupV1                 71s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node default-k8s-diff-port-262764 event: Registered Node default-k8s-diff-port-262764 in Controller
	  Normal   NodeReady                14s                kubelet          Node default-k8s-diff-port-262764 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 08:33] overlayfs: idmapped layers are currently not supported
	[Nov23 08:34] overlayfs: idmapped layers are currently not supported
	[Nov23 08:35] overlayfs: idmapped layers are currently not supported
	[Nov23 08:36] overlayfs: idmapped layers are currently not supported
	[Nov23 08:37] overlayfs: idmapped layers are currently not supported
	[Nov23 08:38] overlayfs: idmapped layers are currently not supported
	[  +8.276067] overlayfs: idmapped layers are currently not supported
	[Nov23 08:39] overlayfs: idmapped layers are currently not supported
	[ +25.090966] overlayfs: idmapped layers are currently not supported
	[Nov23 08:40] overlayfs: idmapped layers are currently not supported
	[ +26.896711] overlayfs: idmapped layers are currently not supported
	[Nov23 08:41] overlayfs: idmapped layers are currently not supported
	[Nov23 08:43] overlayfs: idmapped layers are currently not supported
	[Nov23 08:45] overlayfs: idmapped layers are currently not supported
	[Nov23 08:46] overlayfs: idmapped layers are currently not supported
	[Nov23 08:47] overlayfs: idmapped layers are currently not supported
	[Nov23 08:49] overlayfs: idmapped layers are currently not supported
	[Nov23 08:51] overlayfs: idmapped layers are currently not supported
	[ +55.116920] overlayfs: idmapped layers are currently not supported
	[Nov23 08:52] overlayfs: idmapped layers are currently not supported
	[  +5.731396] overlayfs: idmapped layers are currently not supported
	[Nov23 08:53] overlayfs: idmapped layers are currently not supported
	[Nov23 08:54] overlayfs: idmapped layers are currently not supported
	[Nov23 08:55] overlayfs: idmapped layers are currently not supported
	[Nov23 08:56] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [583f13d69ece9543c1651ac1a024eeb05286d7cd57ef8cc244060375edc387c9] <==
	{"level":"warn","ts":"2025-11-23T08:56:02.095462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.141775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.165096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.196529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.222706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.255861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.276524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.307801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.325081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.370790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.390134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.425562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.451802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.478647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.517850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.542640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.586860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.618096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.649297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.683081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.736225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.758698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.800055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:02.834664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:03.025656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55232","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:57:07 up  9:39,  0 user,  load average: 2.83, 3.08, 2.63
	Linux default-k8s-diff-port-262764 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3ba1e7e411d20dcf283e94de06101ea07bcb64838b8ad072deeb4d756b1e5e43] <==
	I1123 08:56:13.328326       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:56:13.331178       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:56:13.331357       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:56:13.331375       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:56:13.331387       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:56:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:56:13.457035       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:56:13.457052       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:56:13.457061       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:56:13.528657       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:56:43.457709       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 08:56:43.457924       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 08:56:43.529496       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 08:56:43.529680       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1123 08:56:45.058017       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:56:45.058064       1 metrics.go:72] Registering metrics
	I1123 08:56:45.058167       1 controller.go:711] "Syncing nftables rules"
	I1123 08:56:53.461530       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:56:53.461586       1 main.go:301] handling current node
	I1123 08:57:03.459285       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:57:03.459447       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f7134927f3db15ec2b68dbfe2751767ba4e2a94e9863c62252738f11c64647c2] <==
	I1123 08:56:04.586590       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 08:56:04.586666       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1123 08:56:04.603649       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:56:04.609867       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:56:04.607941       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 08:56:04.609906       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:56:04.642092       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:56:04.643754       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:56:05.069263       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:56:05.083841       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:56:05.083966       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:56:06.114985       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:56:06.197085       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:56:06.279724       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:56:06.294475       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 08:56:06.295928       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:56:06.301455       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:56:06.513990       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:56:07.389309       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:56:07.431895       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:56:07.453984       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:56:11.766477       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:56:11.772989       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:56:12.165381       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:56:12.510857       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [2c4efca1377248c8e164891ee59466660b2122cbe93ae04c897e5a6d6f9c4d87] <==
	I1123 08:56:11.505182       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:56:11.505433       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:56:11.505461       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:56:11.505704       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:56:11.505771       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-262764"
	I1123 08:56:11.505821       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 08:56:11.506100       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:56:11.508189       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:56:11.509701       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 08:56:11.511304       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:56:11.511523       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:56:11.520924       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:56:11.524203       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:56:11.525348       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:56:11.529542       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:56:11.531749       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:56:11.532871       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 08:56:11.532988       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:56:11.544244       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:56:11.544351       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:56:11.553705       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:56:11.553808       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:56:11.553840       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:56:11.554895       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 08:56:56.513530       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4025ed831b7877b9347769828801092cfa7e4432480625174a1b8e008deec30d] <==
	I1123 08:56:13.296995       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:56:13.416769       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:56:13.517567       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:56:13.517628       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 08:56:13.517737       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:56:13.669759       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:56:13.669807       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:56:13.681647       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:56:13.681990       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:56:13.682002       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:56:13.685512       1 config.go:200] "Starting service config controller"
	I1123 08:56:13.685627       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:56:13.685720       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:56:13.685784       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:56:13.685881       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:56:13.685944       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:56:13.717149       1 config.go:309] "Starting node config controller"
	I1123 08:56:13.717170       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:56:13.717190       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:56:13.787851       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:56:13.791733       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:56:13.791920       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a9179fa11343179e6de2690204a116cc66bc29ce2df0f69f1057f1166bbe8ab9] <==
	E1123 08:56:04.567560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:56:04.567618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:56:04.567682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:56:04.567720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:56:04.567765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:56:04.567806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:56:04.567891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:56:04.567969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:56:04.568016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:56:04.568056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:56:04.568125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:56:04.575654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 08:56:04.575885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:56:04.575961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:56:04.576085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:56:04.594460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:56:05.423980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:56:05.481349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:56:05.534658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:56:05.568119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:56:05.622684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:56:05.627453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:56:05.691914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:56:05.696727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1123 08:56:07.807631       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:56:08 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:08.874074    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-262764" podStartSLOduration=0.874064282 podStartE2EDuration="874.064282ms" podCreationTimestamp="2025-11-23 08:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:56:08.857789951 +0000 UTC m=+1.519295748" watchObservedRunningTime="2025-11-23 08:56:08.874064282 +0000 UTC m=+1.535570087"
	Nov 23 08:56:08 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:08.916094    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-262764" podStartSLOduration=2.916073957 podStartE2EDuration="2.916073957s" podCreationTimestamp="2025-11-23 08:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:56:08.891270615 +0000 UTC m=+1.552776437" watchObservedRunningTime="2025-11-23 08:56:08.916073957 +0000 UTC m=+1.577579762"
	Nov 23 08:56:08 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:08.916422    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-262764" podStartSLOduration=0.916413221 podStartE2EDuration="916.413221ms" podCreationTimestamp="2025-11-23 08:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:56:08.91061627 +0000 UTC m=+1.572122092" watchObservedRunningTime="2025-11-23 08:56:08.916413221 +0000 UTC m=+1.577919026"
	Nov 23 08:56:11 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:11.539630    1313 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:56:11 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:11.540239    1313 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:56:12 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:12.669719    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cfa1824-511c-4e8b-8bc1-551bda9a3767-xtables-lock\") pod \"kube-proxy-9thkr\" (UID: \"2cfa1824-511c-4e8b-8bc1-551bda9a3767\") " pod="kube-system/kube-proxy-9thkr"
	Nov 23 08:56:12 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:12.669808    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2xfp\" (UniqueName: \"kubernetes.io/projected/2cfa1824-511c-4e8b-8bc1-551bda9a3767-kube-api-access-x2xfp\") pod \"kube-proxy-9thkr\" (UID: \"2cfa1824-511c-4e8b-8bc1-551bda9a3767\") " pod="kube-system/kube-proxy-9thkr"
	Nov 23 08:56:12 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:12.669843    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ddc99fbd-3077-4564-9af7-f3d3cb84526a-cni-cfg\") pod \"kindnet-xsm2q\" (UID: \"ddc99fbd-3077-4564-9af7-f3d3cb84526a\") " pod="kube-system/kindnet-xsm2q"
	Nov 23 08:56:12 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:12.669861    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddc99fbd-3077-4564-9af7-f3d3cb84526a-xtables-lock\") pod \"kindnet-xsm2q\" (UID: \"ddc99fbd-3077-4564-9af7-f3d3cb84526a\") " pod="kube-system/kindnet-xsm2q"
	Nov 23 08:56:12 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:12.669895    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddc99fbd-3077-4564-9af7-f3d3cb84526a-lib-modules\") pod \"kindnet-xsm2q\" (UID: \"ddc99fbd-3077-4564-9af7-f3d3cb84526a\") " pod="kube-system/kindnet-xsm2q"
	Nov 23 08:56:12 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:12.669914    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2cfa1824-511c-4e8b-8bc1-551bda9a3767-kube-proxy\") pod \"kube-proxy-9thkr\" (UID: \"2cfa1824-511c-4e8b-8bc1-551bda9a3767\") " pod="kube-system/kube-proxy-9thkr"
	Nov 23 08:56:12 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:12.669929    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cfa1824-511c-4e8b-8bc1-551bda9a3767-lib-modules\") pod \"kube-proxy-9thkr\" (UID: \"2cfa1824-511c-4e8b-8bc1-551bda9a3767\") " pod="kube-system/kube-proxy-9thkr"
	Nov 23 08:56:12 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:12.670042    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w8z2\" (UniqueName: \"kubernetes.io/projected/ddc99fbd-3077-4564-9af7-f3d3cb84526a-kube-api-access-5w8z2\") pod \"kindnet-xsm2q\" (UID: \"ddc99fbd-3077-4564-9af7-f3d3cb84526a\") " pod="kube-system/kindnet-xsm2q"
	Nov 23 08:56:12 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:12.811042    1313 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 08:56:13 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:13.841985    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-xsm2q" podStartSLOduration=1.841968799 podStartE2EDuration="1.841968799s" podCreationTimestamp="2025-11-23 08:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:56:13.841709869 +0000 UTC m=+6.503215682" watchObservedRunningTime="2025-11-23 08:56:13.841968799 +0000 UTC m=+6.503474604"
	Nov 23 08:56:13 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:13.898920    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9thkr" podStartSLOduration=1.8989025160000002 podStartE2EDuration="1.898902516s" podCreationTimestamp="2025-11-23 08:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:56:13.898634478 +0000 UTC m=+6.560140275" watchObservedRunningTime="2025-11-23 08:56:13.898902516 +0000 UTC m=+6.560408321"
	Nov 23 08:56:53 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:53.998931    1313 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:56:54 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:54.091467    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7t58\" (UniqueName: \"kubernetes.io/projected/b14064f1-d4ac-44c3-8eff-4854e3c5615e-kube-api-access-k7t58\") pod \"storage-provisioner\" (UID: \"b14064f1-d4ac-44c3-8eff-4854e3c5615e\") " pod="kube-system/storage-provisioner"
	Nov 23 08:56:54 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:54.091524    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e362045-d2e1-48cc-9e1d-c6b0dfa33477-config-volume\") pod \"coredns-66bc5c9577-mmrrf\" (UID: \"6e362045-d2e1-48cc-9e1d-c6b0dfa33477\") " pod="kube-system/coredns-66bc5c9577-mmrrf"
	Nov 23 08:56:54 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:54.091549    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45546\" (UniqueName: \"kubernetes.io/projected/6e362045-d2e1-48cc-9e1d-c6b0dfa33477-kube-api-access-45546\") pod \"coredns-66bc5c9577-mmrrf\" (UID: \"6e362045-d2e1-48cc-9e1d-c6b0dfa33477\") " pod="kube-system/coredns-66bc5c9577-mmrrf"
	Nov 23 08:56:54 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:54.091587    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b14064f1-d4ac-44c3-8eff-4854e3c5615e-tmp\") pod \"storage-provisioner\" (UID: \"b14064f1-d4ac-44c3-8eff-4854e3c5615e\") " pod="kube-system/storage-provisioner"
	Nov 23 08:56:54 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:54.965501    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mmrrf" podStartSLOduration=42.965481518 podStartE2EDuration="42.965481518s" podCreationTimestamp="2025-11-23 08:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:56:54.948467772 +0000 UTC m=+47.609973569" watchObservedRunningTime="2025-11-23 08:56:54.965481518 +0000 UTC m=+47.626987315"
	Nov 23 08:56:54 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:54.984242    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.984221969000004 podStartE2EDuration="40.984221969s" podCreationTimestamp="2025-11-23 08:56:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:56:54.966163165 +0000 UTC m=+47.627668970" watchObservedRunningTime="2025-11-23 08:56:54.984221969 +0000 UTC m=+47.645727766"
	Nov 23 08:56:57 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:57.110856    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbzcf\" (UniqueName: \"kubernetes.io/projected/5e87a35a-9a78-4158-8a26-e6618c72aa86-kube-api-access-bbzcf\") pod \"busybox\" (UID: \"5e87a35a-9a78-4158-8a26-e6618c72aa86\") " pod="default/busybox"
	Nov 23 08:56:59 default-k8s-diff-port-262764 kubelet[1313]: I1123 08:56:59.962041    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.861402564 podStartE2EDuration="3.962023012s" podCreationTimestamp="2025-11-23 08:56:56 +0000 UTC" firstStartedPulling="2025-11-23 08:56:57.338316953 +0000 UTC m=+49.999822750" lastFinishedPulling="2025-11-23 08:56:59.438937393 +0000 UTC m=+52.100443198" observedRunningTime="2025-11-23 08:56:59.960981179 +0000 UTC m=+52.622486984" watchObservedRunningTime="2025-11-23 08:56:59.962023012 +0000 UTC m=+52.623528826"
	
	
	==> storage-provisioner [48aca4d8d42241c902b9218b7f678c7c31619d11653e794b8ffe619a6446daa1] <==
	I1123 08:56:54.445051       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:56:54.468397       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:56:54.468451       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:56:54.471631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:56:54.480896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:56:54.481168       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:56:54.482465       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-262764_905d1cf2-233f-4f58-a365-67708094fd14!
	I1123 08:56:54.491352       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"664f8c79-8b37-4f2b-932e-885c1705fac8", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-262764_905d1cf2-233f-4f58-a365-67708094fd14 became leader
	W1123 08:56:54.493372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:56:54.503135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:56:54.583305       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-262764_905d1cf2-233f-4f58-a365-67708094fd14!
	W1123 08:56:56.506398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:56:56.512511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:56:58.515550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:56:58.520004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:00.523464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:00.527886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:02.531145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:02.538431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:04.541981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:04.546437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:06.549585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:06.557250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-262764 -n default-k8s-diff-port-262764
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-262764 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-879861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-879861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (457.051265ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:57:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-879861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-879861 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-879861 describe deploy/metrics-server -n kube-system: exit status 1 (117.487992ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-879861 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
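The assertion at start_stop_delete_test.go:219 compares the deployment's container image against the --images/--registries overrides passed to 'addons enable'. In a run where the addon actually applied, the value the test looks for can be read back directly; a sketch of that check (hypothetical here, since the metrics-server deployment was never created in this run):

  # expected to print an image containing fake.domain/registry.k8s.io/echoserver:1.4
  kubectl --context embed-certs-879861 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'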
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-879861
helpers_test.go:243: (dbg) docker inspect embed-certs-879861:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5",
	        "Created": "2025-11-23T08:56:19.024991587Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1230989,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:56:19.09070165Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/hostname",
	        "HostsPath": "/var/lib/docker/containers/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/hosts",
	        "LogPath": "/var/lib/docker/containers/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5-json.log",
	        "Name": "/embed-certs-879861",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-879861:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-879861",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5",
	                "LowerDir": "/var/lib/docker/overlay2/a3ebc4c752dd4d002b5943db6e5cfab20a769c34737858969bb4d642f4ef53ce-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3ebc4c752dd4d002b5943db6e5cfab20a769c34737858969bb4d642f4ef53ce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3ebc4c752dd4d002b5943db6e5cfab20a769c34737858969bb4d642f4ef53ce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3ebc4c752dd4d002b5943db6e5cfab20a769c34737858969bb4d642f4ef53ce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-879861",
	                "Source": "/var/lib/docker/volumes/embed-certs-879861/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-879861",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-879861",
	                "name.minikube.sigs.k8s.io": "embed-certs-879861",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d4efa08bf040ba3fe585a16a4f034fe4e10e1c75b567905ee62f6597dc390771",
	            "SandboxKey": "/var/run/docker/netns/d4efa08bf040",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34527"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34528"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34531"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34529"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34530"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-879861": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:c9:d4:10:3d:17",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "74cdfb3f8ce6a2d207916e4d31bc2aa3571f99fa42bfb2db8c6fa76bac60c37f",
	                    "EndpointID": "c033796a089a93594c3209e0f900f1caf6bd8983b5278fee2f2019b5feb2662f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-879861",
	                        "0b83e5e6966d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
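Most of the inspect dump above is boilerplate; the part the harness relies on later is the per-port host bindings under NetworkSettings.Ports. The Go template minikube itself uses for the SSH port appears in the "Last Start" log below, and the same approach works by hand against this container (a sketch; for this run the first command should print 34527):

  # mapped host port for 22/tcp on the embed-certs node container
  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-879861
  # or dump all port bindings as JSON
  docker container inspect -f '{{json .NetworkSettings.Ports}}' embed-certs-879861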
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-879861 -n embed-certs-879861
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-879861 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-879861 logs -n 25: (1.652810626s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p kubernetes-upgrade-354226                                                                                                                                                                                                                  │ kubernetes-upgrade-354226    │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ start   │ -p cert-expiration-322507 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-322507       │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ delete  │ -p force-systemd-env-498438                                                                                                                                                                                                                   │ force-systemd-env-498438     │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ start   │ -p cert-options-194318 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-194318          │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ ssh     │ cert-options-194318 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-194318          │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ ssh     │ -p cert-options-194318 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-194318          │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ delete  │ -p cert-options-194318                                                                                                                                                                                                                        │ cert-options-194318          │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ start   │ -p old-k8s-version-283312 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-283312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │                     │
	│ stop    │ -p old-k8s-version-283312 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-283312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:54 UTC │
	│ start   │ -p old-k8s-version-283312 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:55 UTC │
	│ image   │ old-k8s-version-283312 image list --format=json                                                                                                                                                                                               │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ pause   │ -p old-k8s-version-283312 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ delete  │ -p old-k8s-version-283312                                                                                                                                                                                                                     │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ delete  │ -p old-k8s-version-283312                                                                                                                                                                                                                     │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ start   │ -p default-k8s-diff-port-262764 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p cert-expiration-322507 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-322507       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p cert-expiration-322507                                                                                                                                                                                                                     │ cert-expiration-322507       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-262764 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-262764 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-262764 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ start   │ -p default-k8s-diff-port-262764 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-879861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:57:20
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:57:20.856354 1233920 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:57:20.856557 1233920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:57:20.856583 1233920 out.go:374] Setting ErrFile to fd 2...
	I1123 08:57:20.856603 1233920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:57:20.856980 1233920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:57:20.857454 1233920 out.go:368] Setting JSON to false
	I1123 08:57:20.858631 1233920 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34786,"bootTime":1763853455,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 08:57:20.858758 1233920 start.go:143] virtualization:  
	I1123 08:57:20.861875 1233920 out.go:179] * [default-k8s-diff-port-262764] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:57:20.865648 1233920 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:57:20.865781 1233920 notify.go:221] Checking for updates...
	I1123 08:57:20.871441 1233920 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:57:20.874383 1233920 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:57:20.877379 1233920 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 08:57:20.880294 1233920 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:57:20.883209 1233920 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:57:20.886576 1233920 config.go:182] Loaded profile config "default-k8s-diff-port-262764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:57:20.887247 1233920 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:57:20.917189 1233920 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:57:20.917312 1233920 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:57:20.978183 1233920 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:57:20.969087622 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:57:20.978289 1233920 docker.go:319] overlay module found
	I1123 08:57:20.981358 1233920 out.go:179] * Using the docker driver based on existing profile
	I1123 08:57:20.984266 1233920 start.go:309] selected driver: docker
	I1123 08:57:20.984283 1233920 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-262764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-262764 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:57:20.984391 1233920 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:57:20.985109 1233920 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:57:21.050293 1233920 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:57:21.040373159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:57:21.050618 1233920 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:57:21.050652 1233920 cni.go:84] Creating CNI manager for ""
	I1123 08:57:21.050714 1233920 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:57:21.050758 1233920 start.go:353] cluster config:
	{Name:default-k8s-diff-port-262764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-262764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:57:21.053830 1233920 out.go:179] * Starting "default-k8s-diff-port-262764" primary control-plane node in "default-k8s-diff-port-262764" cluster
	I1123 08:57:21.056543 1233920 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:57:21.059573 1233920 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:57:21.062497 1233920 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:57:21.062540 1233920 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:57:21.062575 1233920 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 08:57:21.062584 1233920 cache.go:65] Caching tarball of preloaded images
	I1123 08:57:21.062660 1233920 preload.go:238] Found /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 08:57:21.062669 1233920 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:57:21.062776 1233920 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/config.json ...
	I1123 08:57:21.080090 1233920 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:57:21.080114 1233920 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:57:21.080129 1233920 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:57:21.080158 1233920 start.go:360] acquireMachinesLock for default-k8s-diff-port-262764: {Name:mkf0c163a908f6958bb9c8bee697ca96661f1525 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:57:21.080217 1233920 start.go:364] duration metric: took 37.02µs to acquireMachinesLock for "default-k8s-diff-port-262764"
	I1123 08:57:21.080241 1233920 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:57:21.080247 1233920 fix.go:54] fixHost starting: 
	I1123 08:57:21.080499 1233920 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-262764 --format={{.State.Status}}
	I1123 08:57:21.097141 1233920 fix.go:112] recreateIfNeeded on default-k8s-diff-port-262764: state=Stopped err=<nil>
	W1123 08:57:21.097191 1233920 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 08:57:19.836804 1230335 node_ready.go:57] node "embed-certs-879861" has "Ready":"False" status (will retry)
	W1123 08:57:21.837009 1230335 node_ready.go:57] node "embed-certs-879861" has "Ready":"False" status (will retry)
	I1123 08:57:21.100434 1233920 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-262764" ...
	I1123 08:57:21.100527 1233920 cli_runner.go:164] Run: docker start default-k8s-diff-port-262764
	I1123 08:57:21.370597 1233920 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-262764 --format={{.State.Status}}
	I1123 08:57:21.391415 1233920 kic.go:430] container "default-k8s-diff-port-262764" state is running.
	I1123 08:57:21.391801 1233920 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-262764
	I1123 08:57:21.414074 1233920 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/config.json ...
	I1123 08:57:21.414300 1233920 machine.go:94] provisionDockerMachine start ...
	I1123 08:57:21.414369 1233920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-262764
	I1123 08:57:21.433969 1233920 main.go:143] libmachine: Using SSH client type: native
	I1123 08:57:21.434298 1233920 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34532 <nil> <nil>}
	I1123 08:57:21.434306 1233920 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:57:21.435583 1233920 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:57:24.590482 1233920 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-262764
	
	I1123 08:57:24.590510 1233920 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-262764"
	I1123 08:57:24.590573 1233920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-262764
	I1123 08:57:24.607831 1233920 main.go:143] libmachine: Using SSH client type: native
	I1123 08:57:24.608167 1233920 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34532 <nil> <nil>}
	I1123 08:57:24.608185 1233920 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-262764 && echo "default-k8s-diff-port-262764" | sudo tee /etc/hostname
	I1123 08:57:24.768850 1233920 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-262764
	
	I1123 08:57:24.768925 1233920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-262764
	I1123 08:57:24.787489 1233920 main.go:143] libmachine: Using SSH client type: native
	I1123 08:57:24.787798 1233920 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34532 <nil> <nil>}
	I1123 08:57:24.787820 1233920 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-262764' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-262764/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-262764' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:57:24.940502 1233920 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:57:24.940532 1233920 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 08:57:24.940565 1233920 ubuntu.go:190] setting up certificates
	I1123 08:57:24.940575 1233920 provision.go:84] configureAuth start
	I1123 08:57:24.940646 1233920 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-262764
	I1123 08:57:24.958904 1233920 provision.go:143] copyHostCerts
	I1123 08:57:24.958975 1233920 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem, removing ...
	I1123 08:57:24.958997 1233920 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem
	I1123 08:57:24.959077 1233920 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
	I1123 08:57:24.959347 1233920 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem, removing ...
	I1123 08:57:24.959362 1233920 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem
	I1123 08:57:24.959396 1233920 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 08:57:24.959468 1233920 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem, removing ...
	I1123 08:57:24.959478 1233920 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem
	I1123 08:57:24.959502 1233920 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 08:57:24.959555 1233920 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-262764 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-262764 localhost minikube]
	I1123 08:57:25.142489 1233920 provision.go:177] copyRemoteCerts
	I1123 08:57:25.142560 1233920 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:57:25.142623 1233920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-262764
	I1123 08:57:25.160560 1233920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/default-k8s-diff-port-262764/id_rsa Username:docker}
	I1123 08:57:25.266960 1233920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:57:25.284283 1233920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 08:57:25.301485 1233920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 08:57:25.326963 1233920 provision.go:87] duration metric: took 386.365643ms to configureAuth
	I1123 08:57:25.326995 1233920 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:57:25.327229 1233920 config.go:182] Loaded profile config "default-k8s-diff-port-262764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:57:25.327340 1233920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-262764
	I1123 08:57:25.345237 1233920 main.go:143] libmachine: Using SSH client type: native
	I1123 08:57:25.345548 1233920 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34532 <nil> <nil>}
	I1123 08:57:25.345576 1233920 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:57:25.698230 1233920 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:57:25.698266 1233920 machine.go:97] duration metric: took 4.283948348s to provisionDockerMachine
	I1123 08:57:25.698278 1233920 start.go:293] postStartSetup for "default-k8s-diff-port-262764" (driver="docker")
	I1123 08:57:25.698288 1233920 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:57:25.698354 1233920 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:57:25.698415 1233920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-262764
	I1123 08:57:25.722435 1233920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/default-k8s-diff-port-262764/id_rsa Username:docker}
	I1123 08:57:25.826991 1233920 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:57:25.830211 1233920 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:57:25.830237 1233920 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:57:25.830248 1233920 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/addons for local assets ...
	I1123 08:57:25.830306 1233920 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/files for local assets ...
	I1123 08:57:25.830398 1233920 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem -> 10431592.pem in /etc/ssl/certs
	I1123 08:57:25.830502 1233920 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:57:25.838643 1233920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:57:25.855641 1233920 start.go:296] duration metric: took 157.348079ms for postStartSetup
	I1123 08:57:25.855740 1233920 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:57:25.855784 1233920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-262764
	I1123 08:57:25.872135 1233920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/default-k8s-diff-port-262764/id_rsa Username:docker}
	I1123 08:57:25.977061 1233920 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:57:25.981607 1233920 fix.go:56] duration metric: took 4.901354s for fixHost
	I1123 08:57:25.981632 1233920 start.go:83] releasing machines lock for "default-k8s-diff-port-262764", held for 4.901400521s
	I1123 08:57:25.981716 1233920 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-262764
	I1123 08:57:25.997918 1233920 ssh_runner.go:195] Run: cat /version.json
	I1123 08:57:25.997989 1233920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-262764
	I1123 08:57:25.998240 1233920 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:57:25.998300 1233920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-262764
	I1123 08:57:26.030736 1233920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/default-k8s-diff-port-262764/id_rsa Username:docker}
	I1123 08:57:26.045307 1233920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/default-k8s-diff-port-262764/id_rsa Username:docker}
	I1123 08:57:26.134727 1233920 ssh_runner.go:195] Run: systemctl --version
	I1123 08:57:26.229790 1233920 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:57:26.273001 1233920 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:57:26.277794 1233920 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:57:26.277862 1233920 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:57:26.286648 1233920 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:57:26.286671 1233920 start.go:496] detecting cgroup driver to use...
	I1123 08:57:26.286722 1233920 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:57:26.286792 1233920 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:57:26.302327 1233920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:57:26.320232 1233920 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:57:26.320330 1233920 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:57:26.336474 1233920 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:57:26.350059 1233920 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:57:26.456990 1233920 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:57:26.587803 1233920 docker.go:234] disabling docker service ...
	I1123 08:57:26.587897 1233920 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:57:26.603953 1233920 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:57:26.620418 1233920 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:57:26.732537 1233920 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:57:26.850828 1233920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:57:26.864200 1233920 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:57:26.879359 1233920 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:57:26.879453 1233920 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:57:26.888054 1233920 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 08:57:26.888121 1233920 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:57:26.896857 1233920 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:57:26.905923 1233920 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:57:26.915132 1233920 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:57:26.923627 1233920 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:57:26.934119 1233920 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:57:26.942652 1233920 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:57:26.951725 1233920 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:57:26.959122 1233920 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:57:26.966133 1233920 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:57:27.095942 1233920 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 08:57:27.269791 1233920 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:57:27.269871 1233920 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:57:27.273705 1233920 start.go:564] Will wait 60s for crictl version
	I1123 08:57:27.273768 1233920 ssh_runner.go:195] Run: which crictl
	I1123 08:57:27.277341 1233920 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:57:27.303732 1233920 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:57:27.303828 1233920 ssh_runner.go:195] Run: crio --version
	I1123 08:57:27.333648 1233920 ssh_runner.go:195] Run: crio --version
	I1123 08:57:27.366900 1233920 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1123 08:57:23.837174 1230335 node_ready.go:57] node "embed-certs-879861" has "Ready":"False" status (will retry)
	W1123 08:57:25.837262 1230335 node_ready.go:57] node "embed-certs-879861" has "Ready":"False" status (will retry)
	I1123 08:57:27.369637 1233920 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-262764 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:57:27.387200 1233920 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:57:27.391115 1233920 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:57:27.401908 1233920 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-262764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-262764 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:57:27.402028 1233920 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:57:27.402087 1233920 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:57:27.438424 1233920 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:57:27.438445 1233920 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:57:27.438498 1233920 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:57:27.464563 1233920 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:57:27.464588 1233920 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:57:27.464596 1233920 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1123 08:57:27.464700 1233920 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-262764 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-262764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:57:27.464780 1233920 ssh_runner.go:195] Run: crio config
	I1123 08:57:27.523410 1233920 cni.go:84] Creating CNI manager for ""
	I1123 08:57:27.523433 1233920 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:57:27.523456 1233920 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:57:27.523479 1233920 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-262764 NodeName:default-k8s-diff-port-262764 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:57:27.523598 1233920 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-262764"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:57:27.523674 1233920 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:57:27.531550 1233920 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:57:27.531622 1233920 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:57:27.539397 1233920 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1123 08:57:27.551942 1233920 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:57:27.563818 1233920 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
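At this point the generated kubeadm/kubelet/kube-proxy config shown above has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node. If you need to sanity-check such a file by hand, recent kubeadm releases can validate it offline (illustrative; the test itself takes the cluster-restart path below instead):

    $ sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new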
	I1123 08:57:27.577235 1233920 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:57:27.580678 1233920 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:57:27.589810 1233920 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:57:27.707221 1233920 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:57:27.729613 1233920 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764 for IP: 192.168.85.2
	I1123 08:57:27.729634 1233920 certs.go:195] generating shared ca certs ...
	I1123 08:57:27.729649 1233920 certs.go:227] acquiring lock for ca certs: {Name:mk8b2dd1177c57b74f955f055073d275001ee616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:27.729827 1233920 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key
	I1123 08:57:27.729875 1233920 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key
	I1123 08:57:27.729893 1233920 certs.go:257] generating profile certs ...
	I1123 08:57:27.729977 1233920 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/client.key
	I1123 08:57:27.730046 1233920 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/apiserver.key.f9925b99
	I1123 08:57:27.730092 1233920 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/proxy-client.key
	I1123 08:57:27.730204 1233920 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem (1338 bytes)
	W1123 08:57:27.730240 1233920 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159_empty.pem, impossibly tiny 0 bytes
	I1123 08:57:27.730253 1233920 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:57:27.730285 1233920 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:57:27.730312 1233920 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:57:27.730339 1233920 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem (1675 bytes)
	I1123 08:57:27.730392 1233920 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:57:27.730983 1233920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:57:27.751039 1233920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:57:27.770482 1233920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:57:27.795577 1233920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:57:27.828157 1233920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 08:57:27.857725 1233920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:57:27.881992 1233920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:57:27.904730 1233920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:57:27.938023 1233920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem --> /usr/share/ca-certificates/1043159.pem (1338 bytes)
	I1123 08:57:27.957348 1233920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /usr/share/ca-certificates/10431592.pem (1708 bytes)
	I1123 08:57:27.976483 1233920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:57:28.012063 1233920 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:57:28.028589 1233920 ssh_runner.go:195] Run: openssl version
	I1123 08:57:28.035634 1233920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:57:28.045355 1233920 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:57:28.050280 1233920 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:57:28.050350 1233920 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:57:28.092243 1233920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:57:28.100721 1233920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1043159.pem && ln -fs /usr/share/ca-certificates/1043159.pem /etc/ssl/certs/1043159.pem"
	I1123 08:57:28.108983 1233920 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1043159.pem
	I1123 08:57:28.112901 1233920 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:03 /usr/share/ca-certificates/1043159.pem
	I1123 08:57:28.112969 1233920 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1043159.pem
	I1123 08:57:28.154219 1233920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1043159.pem /etc/ssl/certs/51391683.0"
	I1123 08:57:28.162084 1233920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10431592.pem && ln -fs /usr/share/ca-certificates/10431592.pem /etc/ssl/certs/10431592.pem"
	I1123 08:57:28.170071 1233920 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10431592.pem
	I1123 08:57:28.173780 1233920 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:03 /usr/share/ca-certificates/10431592.pem
	I1123 08:57:28.173843 1233920 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10431592.pem
	I1123 08:57:28.214332 1233920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10431592.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:57:28.222068 1233920 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:57:28.225518 1233920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 08:57:28.268028 1233920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 08:57:28.309976 1233920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 08:57:28.361755 1233920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 08:57:28.433846 1233920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 08:57:28.529349 1233920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
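Each of the openssl invocations above uses -checkend 86400, which exits 0 only if the certificate remains valid for at least the next 24 hours (86400 seconds), presumably so minikube can regenerate anything about to expire. A standalone equivalent for a single cert (illustrative):

    $ sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
        && echo "valid for at least 24h" || echo "expires within 24h"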
	I1123 08:57:28.624503 1233920 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-262764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-262764 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:57:28.624639 1233920 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:57:28.624744 1233920 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:57:28.671559 1233920 cri.go:89] found id: "844d5c6d2fdc6889c36b9911a4a6534e4317818c129eb910010b6c0ffb4f03f7"
	I1123 08:57:28.671630 1233920 cri.go:89] found id: "9183b8d5f0167d65acae545428bcefaad15989e0187470c12fabe000b501d7b6"
	I1123 08:57:28.671648 1233920 cri.go:89] found id: "3c79c59cf7838dc0d18f5c3de6bc6a24338c907a9104ae14d156735b130d2671"
	I1123 08:57:28.671666 1233920 cri.go:89] found id: "69a0aeac491393aeac0ffcc4bc7ed28f76ff736f9b82dde46869747ff492411b"
	I1123 08:57:28.671702 1233920 cri.go:89] found id: ""
	I1123 08:57:28.671769 1233920 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 08:57:28.692376 1233920 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:57:28Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:57:28.692532 1233920 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:57:28.702024 1233920 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 08:57:28.702096 1233920 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 08:57:28.702182 1233920 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 08:57:28.711660 1233920 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:57:28.712573 1233920 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-262764" does not appear in /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:57:28.713167 1233920 kubeconfig.go:62] /home/jenkins/minikube-integration/21966-1041293/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-262764" cluster setting kubeconfig missing "default-k8s-diff-port-262764" context setting]
	I1123 08:57:28.714123 1233920 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:28.716074 1233920 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 08:57:28.729139 1233920 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 08:57:28.729214 1233920 kubeadm.go:602] duration metric: took 27.098067ms to restartPrimaryControlPlane
	I1123 08:57:28.729237 1233920 kubeadm.go:403] duration metric: took 104.745673ms to StartCluster
	I1123 08:57:28.729281 1233920 settings.go:142] acquiring lock: {Name:mk23f3092f33e47ced9558cb4bac2b30c55547fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:28.729374 1233920 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:57:28.730945 1233920 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:28.731330 1233920 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:57:28.731782 1233920 config.go:182] Loaded profile config "default-k8s-diff-port-262764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:57:28.731752 1233920 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:57:28.731956 1233920 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-262764"
	I1123 08:57:28.731997 1233920 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-262764"
	W1123 08:57:28.732021 1233920 addons.go:248] addon storage-provisioner should already be in state true
	I1123 08:57:28.732071 1233920 host.go:66] Checking if "default-k8s-diff-port-262764" exists ...
	I1123 08:57:28.732672 1233920 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-262764 --format={{.State.Status}}
	I1123 08:57:28.732869 1233920 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-262764"
	I1123 08:57:28.732914 1233920 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-262764"
	W1123 08:57:28.732937 1233920 addons.go:248] addon dashboard should already be in state true
	I1123 08:57:28.732989 1233920 host.go:66] Checking if "default-k8s-diff-port-262764" exists ...
	I1123 08:57:28.733238 1233920 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-262764"
	I1123 08:57:28.733264 1233920 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-262764"
	I1123 08:57:28.733518 1233920 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-262764 --format={{.State.Status}}
	I1123 08:57:28.733529 1233920 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-262764 --format={{.State.Status}}
	I1123 08:57:28.737356 1233920 out.go:179] * Verifying Kubernetes components...
	I1123 08:57:28.746043 1233920 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:57:28.774080 1233920 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 08:57:28.776923 1233920 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 08:57:28.779734 1233920 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 08:57:28.779759 1233920 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 08:57:28.779828 1233920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-262764
	I1123 08:57:28.804508 1233920 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-262764"
	W1123 08:57:28.804529 1233920 addons.go:248] addon default-storageclass should already be in state true
	I1123 08:57:28.804554 1233920 host.go:66] Checking if "default-k8s-diff-port-262764" exists ...
	I1123 08:57:28.804968 1233920 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-262764 --format={{.State.Status}}
	I1123 08:57:28.815238 1233920 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:57:28.818403 1233920 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:57:28.818426 1233920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:57:28.818497 1233920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-262764
	I1123 08:57:28.832674 1233920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/default-k8s-diff-port-262764/id_rsa Username:docker}
	I1123 08:57:28.859486 1233920 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:57:28.859508 1233920 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:57:28.859570 1233920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-262764
	I1123 08:57:28.861096 1233920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/default-k8s-diff-port-262764/id_rsa Username:docker}
	I1123 08:57:28.888922 1233920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/default-k8s-diff-port-262764/id_rsa Username:docker}
	I1123 08:57:29.095048 1233920 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:57:29.112938 1233920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:57:29.141399 1233920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:57:29.195834 1233920 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-262764" to be "Ready" ...
	I1123 08:57:29.202029 1233920 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 08:57:29.202056 1233920 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 08:57:29.286454 1233920 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 08:57:29.286478 1233920 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 08:57:29.380269 1233920 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 08:57:29.380295 1233920 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 08:57:29.402449 1233920 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 08:57:29.402471 1233920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 08:57:29.433270 1233920 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 08:57:29.433295 1233920 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 08:57:29.457785 1233920 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 08:57:29.457808 1233920 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 08:57:29.480323 1233920 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 08:57:29.480347 1233920 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 08:57:29.508445 1233920 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 08:57:29.508469 1233920 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 08:57:29.528338 1233920 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:57:29.528363 1233920 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 08:57:29.548408 1233920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
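The single kubectl apply above installs all ten dashboard manifests staged under /etc/kubernetes/addons/. A quick way to watch the rollout from the node (illustrative; assumes the addon's usual kubernetes-dashboard namespace):

    $ sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard get pods -w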
	W1123 08:57:28.337822 1230335 node_ready.go:57] node "embed-certs-879861" has "Ready":"False" status (will retry)
	W1123 08:57:30.837551 1230335 node_ready.go:57] node "embed-certs-879861" has "Ready":"False" status (will retry)
	I1123 08:57:32.836474 1230335 node_ready.go:49] node "embed-certs-879861" is "Ready"
	I1123 08:57:32.836505 1230335 node_ready.go:38] duration metric: took 40.502552745s for node "embed-certs-879861" to be "Ready" ...
	I1123 08:57:32.836518 1230335 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:57:32.836581 1230335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:57:32.852847 1230335 api_server.go:72] duration metric: took 41.473233473s to wait for apiserver process to appear ...
	I1123 08:57:32.852884 1230335 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:57:32.852903 1230335 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:57:32.863392 1230335 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 08:57:32.864627 1230335 api_server.go:141] control plane version: v1.34.1
	I1123 08:57:32.864653 1230335 api_server.go:131] duration metric: took 11.761354ms to wait for apiserver health ...
	I1123 08:57:32.864663 1230335 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:57:32.872820 1230335 system_pods.go:59] 8 kube-system pods found
	I1123 08:57:32.872858 1230335 system_pods.go:61] "coredns-66bc5c9577-r5lt5" [c470da65-70be-4126-90eb-0434f6668546] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:57:32.872867 1230335 system_pods.go:61] "etcd-embed-certs-879861" [bfcc5c7b-69bf-4a5e-a473-ec3b9d4c1a98] Running
	I1123 08:57:32.872872 1230335 system_pods.go:61] "kindnet-f6j8g" [973f09b1-28dd-40ea-9180-85020f65a04e] Running
	I1123 08:57:32.872879 1230335 system_pods.go:61] "kube-apiserver-embed-certs-879861" [d3d9369f-cc37-484a-a5b9-bbe97c1b1a51] Running
	I1123 08:57:32.872883 1230335 system_pods.go:61] "kube-controller-manager-embed-certs-879861" [02779370-efc5-438a-a94c-4fc12286c2fd] Running
	I1123 08:57:32.872887 1230335 system_pods.go:61] "kube-proxy-bf5ck" [37c2f985-65de-4d46-955d-3767fe0f32a2] Running
	I1123 08:57:32.872891 1230335 system_pods.go:61] "kube-scheduler-embed-certs-879861" [dab432a6-c8f8-4282-b842-bf07ca17e9e5] Running
	I1123 08:57:32.872897 1230335 system_pods.go:61] "storage-provisioner" [cd4e1daf-5ae4-4ebc-b4a1-464686ee3f89] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:57:32.872906 1230335 system_pods.go:74] duration metric: took 8.236841ms to wait for pod list to return data ...
	I1123 08:57:32.872919 1230335 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:57:32.877429 1230335 default_sa.go:45] found service account: "default"
	I1123 08:57:32.877466 1230335 default_sa.go:55] duration metric: took 4.539951ms for default service account to be created ...
	I1123 08:57:32.877476 1230335 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:57:32.885427 1230335 system_pods.go:86] 8 kube-system pods found
	I1123 08:57:32.885460 1230335 system_pods.go:89] "coredns-66bc5c9577-r5lt5" [c470da65-70be-4126-90eb-0434f6668546] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:57:32.885467 1230335 system_pods.go:89] "etcd-embed-certs-879861" [bfcc5c7b-69bf-4a5e-a473-ec3b9d4c1a98] Running
	I1123 08:57:32.885473 1230335 system_pods.go:89] "kindnet-f6j8g" [973f09b1-28dd-40ea-9180-85020f65a04e] Running
	I1123 08:57:32.885477 1230335 system_pods.go:89] "kube-apiserver-embed-certs-879861" [d3d9369f-cc37-484a-a5b9-bbe97c1b1a51] Running
	I1123 08:57:32.885483 1230335 system_pods.go:89] "kube-controller-manager-embed-certs-879861" [02779370-efc5-438a-a94c-4fc12286c2fd] Running
	I1123 08:57:32.885487 1230335 system_pods.go:89] "kube-proxy-bf5ck" [37c2f985-65de-4d46-955d-3767fe0f32a2] Running
	I1123 08:57:32.885498 1230335 system_pods.go:89] "kube-scheduler-embed-certs-879861" [dab432a6-c8f8-4282-b842-bf07ca17e9e5] Running
	I1123 08:57:32.885508 1230335 system_pods.go:89] "storage-provisioner" [cd4e1daf-5ae4-4ebc-b4a1-464686ee3f89] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:57:32.885535 1230335 retry.go:31] will retry after 261.32577ms: missing components: kube-dns
	I1123 08:57:33.155853 1230335 system_pods.go:86] 8 kube-system pods found
	I1123 08:57:33.155898 1230335 system_pods.go:89] "coredns-66bc5c9577-r5lt5" [c470da65-70be-4126-90eb-0434f6668546] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:57:33.155906 1230335 system_pods.go:89] "etcd-embed-certs-879861" [bfcc5c7b-69bf-4a5e-a473-ec3b9d4c1a98] Running
	I1123 08:57:33.155914 1230335 system_pods.go:89] "kindnet-f6j8g" [973f09b1-28dd-40ea-9180-85020f65a04e] Running
	I1123 08:57:33.155919 1230335 system_pods.go:89] "kube-apiserver-embed-certs-879861" [d3d9369f-cc37-484a-a5b9-bbe97c1b1a51] Running
	I1123 08:57:33.155923 1230335 system_pods.go:89] "kube-controller-manager-embed-certs-879861" [02779370-efc5-438a-a94c-4fc12286c2fd] Running
	I1123 08:57:33.155927 1230335 system_pods.go:89] "kube-proxy-bf5ck" [37c2f985-65de-4d46-955d-3767fe0f32a2] Running
	I1123 08:57:33.155932 1230335 system_pods.go:89] "kube-scheduler-embed-certs-879861" [dab432a6-c8f8-4282-b842-bf07ca17e9e5] Running
	I1123 08:57:33.155949 1230335 system_pods.go:89] "storage-provisioner" [cd4e1daf-5ae4-4ebc-b4a1-464686ee3f89] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:57:33.155978 1230335 retry.go:31] will retry after 362.213908ms: missing components: kube-dns
	I1123 08:57:33.672219 1233920 node_ready.go:49] node "default-k8s-diff-port-262764" is "Ready"
	I1123 08:57:33.672295 1233920 node_ready.go:38] duration metric: took 4.476393766s for node "default-k8s-diff-port-262764" to be "Ready" ...
	I1123 08:57:33.672342 1233920 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:57:33.672438 1233920 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:57:33.824414 1233920 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.71139131s)
	I1123 08:57:35.413479 1233920 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.271993112s)
	I1123 08:57:35.413597 1233920 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.86515784s)
	I1123 08:57:35.413816 1233920 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.741348543s)
	I1123 08:57:35.413877 1233920 api_server.go:72] duration metric: took 6.682495628s to wait for apiserver process to appear ...
	I1123 08:57:35.413901 1233920 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:57:35.413947 1233920 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 08:57:35.417059 1233920 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-262764 addons enable metrics-server
	
	I1123 08:57:35.419914 1233920 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1123 08:57:35.423480 1233920 addons.go:530] duration metric: took 6.691736995s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1123 08:57:35.424258 1233920 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 08:57:35.425479 1233920 api_server.go:141] control plane version: v1.34.1
	I1123 08:57:35.425504 1233920 api_server.go:131] duration metric: took 11.584251ms to wait for apiserver health ...
	I1123 08:57:35.425512 1233920 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:57:35.429380 1233920 system_pods.go:59] 8 kube-system pods found
	I1123 08:57:35.429447 1233920 system_pods.go:61] "coredns-66bc5c9577-mmrrf" [6e362045-d2e1-48cc-9e1d-c6b0dfa33477] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:57:35.429470 1233920 system_pods.go:61] "etcd-default-k8s-diff-port-262764" [4021e039-a3e2-4640-b525-61ca05c4f826] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:57:35.429491 1233920 system_pods.go:61] "kindnet-xsm2q" [ddc99fbd-3077-4564-9af7-f3d3cb84526a] Running
	I1123 08:57:35.429536 1233920 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-262764" [0ccc6c56-33cc-434f-b7af-28ea71874781] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:57:35.429558 1233920 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-262764" [32313b16-19a7-4780-b720-a4fbfede7d6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:57:35.429578 1233920 system_pods.go:61] "kube-proxy-9thkr" [2cfa1824-511c-4e8b-8bc1-551bda9a3767] Running
	I1123 08:57:35.429611 1233920 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-262764" [4ad18014-f007-4491-a54a-991aedaddbef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:57:35.429634 1233920 system_pods.go:61] "storage-provisioner" [b14064f1-d4ac-44c3-8eff-4854e3c5615e] Running
	I1123 08:57:35.429655 1233920 system_pods.go:74] duration metric: took 4.136369ms to wait for pod list to return data ...
	I1123 08:57:35.429688 1233920 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:57:35.433277 1233920 default_sa.go:45] found service account: "default"
	I1123 08:57:35.433325 1233920 default_sa.go:55] duration metric: took 3.614512ms for default service account to be created ...
	I1123 08:57:35.433363 1233920 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:57:35.437026 1233920 system_pods.go:86] 8 kube-system pods found
	I1123 08:57:35.437109 1233920 system_pods.go:89] "coredns-66bc5c9577-mmrrf" [6e362045-d2e1-48cc-9e1d-c6b0dfa33477] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:57:35.437133 1233920 system_pods.go:89] "etcd-default-k8s-diff-port-262764" [4021e039-a3e2-4640-b525-61ca05c4f826] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:57:35.437167 1233920 system_pods.go:89] "kindnet-xsm2q" [ddc99fbd-3077-4564-9af7-f3d3cb84526a] Running
	I1123 08:57:35.437192 1233920 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-262764" [0ccc6c56-33cc-434f-b7af-28ea71874781] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:57:35.437215 1233920 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-262764" [32313b16-19a7-4780-b720-a4fbfede7d6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:57:35.437251 1233920 system_pods.go:89] "kube-proxy-9thkr" [2cfa1824-511c-4e8b-8bc1-551bda9a3767] Running
	I1123 08:57:35.437277 1233920 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-262764" [4ad18014-f007-4491-a54a-991aedaddbef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:57:35.437297 1233920 system_pods.go:89] "storage-provisioner" [b14064f1-d4ac-44c3-8eff-4854e3c5615e] Running
	I1123 08:57:35.437333 1233920 system_pods.go:126] duration metric: took 3.945629ms to wait for k8s-apps to be running ...
	I1123 08:57:35.437358 1233920 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:57:35.437440 1233920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:57:35.452555 1233920 system_svc.go:56] duration metric: took 15.189697ms WaitForService to wait for kubelet
	I1123 08:57:35.452632 1233920 kubeadm.go:587] duration metric: took 6.721248483s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:57:35.452682 1233920 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:57:35.455464 1233920 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:57:35.455537 1233920 node_conditions.go:123] node cpu capacity is 2
	I1123 08:57:35.455583 1233920 node_conditions.go:105] duration metric: took 2.883776ms to run NodePressure ...
	I1123 08:57:35.455607 1233920 start.go:242] waiting for startup goroutines ...
	I1123 08:57:35.455641 1233920 start.go:247] waiting for cluster config update ...
	I1123 08:57:35.455671 1233920 start.go:256] writing updated cluster config ...
	I1123 08:57:35.456013 1233920 ssh_runner.go:195] Run: rm -f paused
	I1123 08:57:35.459970 1233920 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:57:35.529312 1233920 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mmrrf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:33.522714 1230335 system_pods.go:86] 8 kube-system pods found
	I1123 08:57:33.522758 1230335 system_pods.go:89] "coredns-66bc5c9577-r5lt5" [c470da65-70be-4126-90eb-0434f6668546] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:57:33.522766 1230335 system_pods.go:89] "etcd-embed-certs-879861" [bfcc5c7b-69bf-4a5e-a473-ec3b9d4c1a98] Running
	I1123 08:57:33.522776 1230335 system_pods.go:89] "kindnet-f6j8g" [973f09b1-28dd-40ea-9180-85020f65a04e] Running
	I1123 08:57:33.522780 1230335 system_pods.go:89] "kube-apiserver-embed-certs-879861" [d3d9369f-cc37-484a-a5b9-bbe97c1b1a51] Running
	I1123 08:57:33.522785 1230335 system_pods.go:89] "kube-controller-manager-embed-certs-879861" [02779370-efc5-438a-a94c-4fc12286c2fd] Running
	I1123 08:57:33.522791 1230335 system_pods.go:89] "kube-proxy-bf5ck" [37c2f985-65de-4d46-955d-3767fe0f32a2] Running
	I1123 08:57:33.522799 1230335 system_pods.go:89] "kube-scheduler-embed-certs-879861" [dab432a6-c8f8-4282-b842-bf07ca17e9e5] Running
	I1123 08:57:33.522817 1230335 system_pods.go:89] "storage-provisioner" [cd4e1daf-5ae4-4ebc-b4a1-464686ee3f89] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:57:33.522837 1230335 retry.go:31] will retry after 449.126211ms: missing components: kube-dns
	I1123 08:57:33.983852 1230335 system_pods.go:86] 8 kube-system pods found
	I1123 08:57:33.983894 1230335 system_pods.go:89] "coredns-66bc5c9577-r5lt5" [c470da65-70be-4126-90eb-0434f6668546] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:57:33.983900 1230335 system_pods.go:89] "etcd-embed-certs-879861" [bfcc5c7b-69bf-4a5e-a473-ec3b9d4c1a98] Running
	I1123 08:57:33.983907 1230335 system_pods.go:89] "kindnet-f6j8g" [973f09b1-28dd-40ea-9180-85020f65a04e] Running
	I1123 08:57:33.983911 1230335 system_pods.go:89] "kube-apiserver-embed-certs-879861" [d3d9369f-cc37-484a-a5b9-bbe97c1b1a51] Running
	I1123 08:57:33.983916 1230335 system_pods.go:89] "kube-controller-manager-embed-certs-879861" [02779370-efc5-438a-a94c-4fc12286c2fd] Running
	I1123 08:57:33.983920 1230335 system_pods.go:89] "kube-proxy-bf5ck" [37c2f985-65de-4d46-955d-3767fe0f32a2] Running
	I1123 08:57:33.983924 1230335 system_pods.go:89] "kube-scheduler-embed-certs-879861" [dab432a6-c8f8-4282-b842-bf07ca17e9e5] Running
	I1123 08:57:33.983928 1230335 system_pods.go:89] "storage-provisioner" [cd4e1daf-5ae4-4ebc-b4a1-464686ee3f89] Running
	I1123 08:57:33.983958 1230335 retry.go:31] will retry after 580.73841ms: missing components: kube-dns
	I1123 08:57:34.568932 1230335 system_pods.go:86] 8 kube-system pods found
	I1123 08:57:34.568963 1230335 system_pods.go:89] "coredns-66bc5c9577-r5lt5" [c470da65-70be-4126-90eb-0434f6668546] Running
	I1123 08:57:34.568969 1230335 system_pods.go:89] "etcd-embed-certs-879861" [bfcc5c7b-69bf-4a5e-a473-ec3b9d4c1a98] Running
	I1123 08:57:34.568974 1230335 system_pods.go:89] "kindnet-f6j8g" [973f09b1-28dd-40ea-9180-85020f65a04e] Running
	I1123 08:57:34.568987 1230335 system_pods.go:89] "kube-apiserver-embed-certs-879861" [d3d9369f-cc37-484a-a5b9-bbe97c1b1a51] Running
	I1123 08:57:34.568993 1230335 system_pods.go:89] "kube-controller-manager-embed-certs-879861" [02779370-efc5-438a-a94c-4fc12286c2fd] Running
	I1123 08:57:34.568997 1230335 system_pods.go:89] "kube-proxy-bf5ck" [37c2f985-65de-4d46-955d-3767fe0f32a2] Running
	I1123 08:57:34.569002 1230335 system_pods.go:89] "kube-scheduler-embed-certs-879861" [dab432a6-c8f8-4282-b842-bf07ca17e9e5] Running
	I1123 08:57:34.569006 1230335 system_pods.go:89] "storage-provisioner" [cd4e1daf-5ae4-4ebc-b4a1-464686ee3f89] Running
	I1123 08:57:34.569014 1230335 system_pods.go:126] duration metric: took 1.691531458s to wait for k8s-apps to be running ...
	I1123 08:57:34.569028 1230335 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:57:34.569091 1230335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:57:34.589473 1230335 system_svc.go:56] duration metric: took 20.434154ms WaitForService to wait for kubelet
	I1123 08:57:34.589503 1230335 kubeadm.go:587] duration metric: took 43.209893708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:57:34.589521 1230335 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:57:34.593391 1230335 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:57:34.593430 1230335 node_conditions.go:123] node cpu capacity is 2
	I1123 08:57:34.593444 1230335 node_conditions.go:105] duration metric: took 3.918381ms to run NodePressure ...
	I1123 08:57:34.593457 1230335 start.go:242] waiting for startup goroutines ...
	I1123 08:57:34.593465 1230335 start.go:247] waiting for cluster config update ...
	I1123 08:57:34.593476 1230335 start.go:256] writing updated cluster config ...
	I1123 08:57:34.593744 1230335 ssh_runner.go:195] Run: rm -f paused
	I1123 08:57:34.597560 1230335 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:57:34.601472 1230335 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r5lt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:34.609541 1230335 pod_ready.go:94] pod "coredns-66bc5c9577-r5lt5" is "Ready"
	I1123 08:57:34.609570 1230335 pod_ready.go:86] duration metric: took 8.07257ms for pod "coredns-66bc5c9577-r5lt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:34.611993 1230335 pod_ready.go:83] waiting for pod "etcd-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:34.617834 1230335 pod_ready.go:94] pod "etcd-embed-certs-879861" is "Ready"
	I1123 08:57:34.617861 1230335 pod_ready.go:86] duration metric: took 5.844029ms for pod "etcd-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:34.624538 1230335 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:34.628719 1230335 pod_ready.go:94] pod "kube-apiserver-embed-certs-879861" is "Ready"
	I1123 08:57:34.628745 1230335 pod_ready.go:86] duration metric: took 4.181652ms for pod "kube-apiserver-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:34.631601 1230335 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:35.001838 1230335 pod_ready.go:94] pod "kube-controller-manager-embed-certs-879861" is "Ready"
	I1123 08:57:35.001882 1230335 pod_ready.go:86] duration metric: took 370.256276ms for pod "kube-controller-manager-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:35.203236 1230335 pod_ready.go:83] waiting for pod "kube-proxy-bf5ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:35.602471 1230335 pod_ready.go:94] pod "kube-proxy-bf5ck" is "Ready"
	I1123 08:57:35.602545 1230335 pod_ready.go:86] duration metric: took 399.271563ms for pod "kube-proxy-bf5ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:35.801780 1230335 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:36.202372 1230335 pod_ready.go:94] pod "kube-scheduler-embed-certs-879861" is "Ready"
	I1123 08:57:36.202402 1230335 pod_ready.go:86] duration metric: took 400.595013ms for pod "kube-scheduler-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:36.202414 1230335 pod_ready.go:40] duration metric: took 1.604822775s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:57:36.263984 1230335 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 08:57:36.268888 1230335 out.go:179] * Done! kubectl is now configured to use "embed-certs-879861" cluster and "default" namespace by default
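The skew reported just above (kubectl 1.33.2 client against a 1.34.1 control plane) is within kubectl's supported one-minor-version window, so it is informational only. A final sanity check from the host would look like this (illustrative):

    $ kubectl config current-context    # should print embed-certs-879861
    $ kubectl get nodes                 # embed-certs-879861 should report Ready on v1.34.1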
	W1123 08:57:37.545666 1233920 pod_ready.go:104] pod "coredns-66bc5c9577-mmrrf" is not "Ready", error: <nil>
	W1123 08:57:40.039381 1233920 pod_ready.go:104] pod "coredns-66bc5c9577-mmrrf" is not "Ready", error: <nil>
	W1123 08:57:42.534686 1233920 pod_ready.go:104] pod "coredns-66bc5c9577-mmrrf" is not "Ready", error: <nil>
	W1123 08:57:44.539130 1233920 pod_ready.go:104] pod "coredns-66bc5c9577-mmrrf" is not "Ready", error: <nil>
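The pod_ready lines above show minikube's readiness gate: it polls the kube-system pods carrying one of the listed labels (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) for up to 4m0s and moves on once each reports the PodReady condition. A minimal client-go sketch of that style of check follows; the kubeconfig path, the 2-second poll interval, and the single-pod-per-label assumption are illustrative only, not minikube's actual implementation.

	// pollready.go - hedged sketch of label-based "Ready" polling, similar in
	// spirit to the pod_ready wait shown in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's PodReady condition is True.
	func isReady(p corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		labels := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
		}
		deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
		for _, sel := range labels {
			for {
				pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
				if err == nil && len(pods.Items) > 0 && isReady(pods.Items[0]) {
					fmt.Printf("pod with %q is Ready\n", sel)
					break
				}
				if time.Now().After(deadline) {
					panic(fmt.Sprintf("timed out waiting for %q", sel))
				}
				time.Sleep(2 * time.Second)
			}
		}
	}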
	
	
	==> CRI-O <==
	Nov 23 08:57:33 embed-certs-879861 crio[834]: time="2025-11-23T08:57:33.186451776Z" level=info msg="Created container abdf6337cb96cfcb95aa3a46933042e5241c3faf918f35d934ad2bb6ddf80003: kube-system/coredns-66bc5c9577-r5lt5/coredns" id=b736508c-9baa-425a-969d-1f82fb21aead name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:57:33 embed-certs-879861 crio[834]: time="2025-11-23T08:57:33.187152827Z" level=info msg="Starting container: abdf6337cb96cfcb95aa3a46933042e5241c3faf918f35d934ad2bb6ddf80003" id=8408d9e4-dacd-4cb2-8266-7fc24cf9b6f8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:57:33 embed-certs-879861 crio[834]: time="2025-11-23T08:57:33.188899166Z" level=info msg="Started container" PID=1749 containerID=abdf6337cb96cfcb95aa3a46933042e5241c3faf918f35d934ad2bb6ddf80003 description=kube-system/coredns-66bc5c9577-r5lt5/coredns id=8408d9e4-dacd-4cb2-8266-7fc24cf9b6f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=efbbe76221e1ccad7efaffc33d986c8cc966424485cbdbeadb49e1c440803cad
	Nov 23 08:57:36 embed-certs-879861 crio[834]: time="2025-11-23T08:57:36.79435855Z" level=info msg="Running pod sandbox: default/busybox/POD" id=e2374ebc-cdea-4aa7-a90a-52d06603062c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:57:36 embed-certs-879861 crio[834]: time="2025-11-23T08:57:36.79443909Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:57:36 embed-certs-879861 crio[834]: time="2025-11-23T08:57:36.799896545Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:886ccc01380af1b29481f032e61caaccffb9bc4b9ec5bb8e85dea3b651571f1b UID:58c79ac6-29f0-45fb-951d-e92b37939a41 NetNS:/var/run/netns/5bb3300f-1e77-4e74-86a3-9c1fe8c28af1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000cd098}] Aliases:map[]}"
	Nov 23 08:57:36 embed-certs-879861 crio[834]: time="2025-11-23T08:57:36.800101881Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 08:57:36 embed-certs-879861 crio[834]: time="2025-11-23T08:57:36.810621268Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:886ccc01380af1b29481f032e61caaccffb9bc4b9ec5bb8e85dea3b651571f1b UID:58c79ac6-29f0-45fb-951d-e92b37939a41 NetNS:/var/run/netns/5bb3300f-1e77-4e74-86a3-9c1fe8c28af1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000cd098}] Aliases:map[]}"
	Nov 23 08:57:36 embed-certs-879861 crio[834]: time="2025-11-23T08:57:36.810911286Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 08:57:36 embed-certs-879861 crio[834]: time="2025-11-23T08:57:36.820728087Z" level=info msg="Ran pod sandbox 886ccc01380af1b29481f032e61caaccffb9bc4b9ec5bb8e85dea3b651571f1b with infra container: default/busybox/POD" id=e2374ebc-cdea-4aa7-a90a-52d06603062c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:57:36 embed-certs-879861 crio[834]: time="2025-11-23T08:57:36.821771783Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b42d7d59-4488-410d-99f9-20290d19fee8 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:57:36 embed-certs-879861 crio[834]: time="2025-11-23T08:57:36.821888391Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b42d7d59-4488-410d-99f9-20290d19fee8 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:57:36 embed-certs-879861 crio[834]: time="2025-11-23T08:57:36.821924607Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b42d7d59-4488-410d-99f9-20290d19fee8 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:57:36 embed-certs-879861 crio[834]: time="2025-11-23T08:57:36.822901934Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=01256832-eb02-490a-8014-85cf838baff5 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:57:36 embed-certs-879861 crio[834]: time="2025-11-23T08:57:36.824521499Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:57:38 embed-certs-879861 crio[834]: time="2025-11-23T08:57:38.939748404Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=01256832-eb02-490a-8014-85cf838baff5 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:57:38 embed-certs-879861 crio[834]: time="2025-11-23T08:57:38.940528977Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=090a1dba-5292-4eb4-903e-bea24e58a79e name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:57:38 embed-certs-879861 crio[834]: time="2025-11-23T08:57:38.944727326Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=14be8b20-c664-4e1a-89a5-c5a3c02e5117 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:57:38 embed-certs-879861 crio[834]: time="2025-11-23T08:57:38.952999308Z" level=info msg="Creating container: default/busybox/busybox" id=b0eed88f-692c-4501-b733-26141242fca3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:57:38 embed-certs-879861 crio[834]: time="2025-11-23T08:57:38.953148572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:57:38 embed-certs-879861 crio[834]: time="2025-11-23T08:57:38.961109022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:57:38 embed-certs-879861 crio[834]: time="2025-11-23T08:57:38.961594892Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:57:38 embed-certs-879861 crio[834]: time="2025-11-23T08:57:38.981577743Z" level=info msg="Created container 0914d0940fe9d4edc61dadb7c736836020f242834c93f0f09142e4753c2b13e5: default/busybox/busybox" id=b0eed88f-692c-4501-b733-26141242fca3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:57:38 embed-certs-879861 crio[834]: time="2025-11-23T08:57:38.985095602Z" level=info msg="Starting container: 0914d0940fe9d4edc61dadb7c736836020f242834c93f0f09142e4753c2b13e5" id=de2756e8-4a63-4871-9eeb-ca9452b5e74e name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:57:38 embed-certs-879861 crio[834]: time="2025-11-23T08:57:38.98919545Z" level=info msg="Started container" PID=1806 containerID=0914d0940fe9d4edc61dadb7c736836020f242834c93f0f09142e4753c2b13e5 description=default/busybox/busybox id=de2756e8-4a63-4871-9eeb-ca9452b5e74e name=/runtime.v1.RuntimeService/StartContainer sandboxID=886ccc01380af1b29481f032e61caaccffb9bc4b9ec5bb8e85dea3b651571f1b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	0914d0940fe9d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   886ccc01380af       busybox                                      default
	abdf6337cb96c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   efbbe76221e1c       coredns-66bc5c9577-r5lt5                     kube-system
	84659002e7f3a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   5be4543419e67       storage-provisioner                          kube-system
	9896f3b8db7b6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   65fe8bc25f14c       kindnet-f6j8g                                kube-system
	8036634ede1e6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      56 seconds ago       Running             kube-proxy                0                   fcedcacbf4d6a       kube-proxy-bf5ck                             kube-system
	1d1e8d4923c2c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   83ea189d1e2eb       kube-apiserver-embed-certs-879861            kube-system
	2246dea7d708f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   b0b40b17c0718       kube-scheduler-embed-certs-879861            kube-system
	d3239c3418e58       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   0eef4140b9ca7       kube-controller-manager-embed-certs-879861   kube-system
	51cdf1426da42       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   d31ede4ab1f3f       etcd-embed-certs-879861                      kube-system
	
	
	==> coredns [abdf6337cb96cfcb95aa3a46933042e5241c3faf918f35d934ad2bb6ddf80003] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42845 - 20132 "HINFO IN 7461845348071539484.2823067658015723698. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010857272s
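The coredns block above shows the instance binding :53 and answering its own HINFO health-probe query with NXDOMAIN, which is normal at startup. One way to exercise the same resolver from inside the cluster is to point a Go resolver at the kube-dns ClusterIP (10.96.0.10, per the kube-apiserver log further below); the target name kubernetes.default.svc.cluster.local and the timeouts here are assumptions for illustration.

	// dnscheck.go - hedged sketch: resolve an in-cluster name directly against
	// the kube-dns ClusterIP shown in the apiserver log (10.96.0.10).
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			// Force all lookups to go through the cluster DNS service.
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, "udp", "10.96.0.10:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
		if err != nil {
			panic(err)
		}
		fmt.Println(addrs) // on this cluster, the kubernetes service IP 10.96.0.1
	}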
	
	
	==> describe nodes <==
	Name:               embed-certs-879861
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-879861
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=embed-certs-879861
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_56_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:56:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-879861
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:57:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:57:47 +0000   Sun, 23 Nov 2025 08:56:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:57:47 +0000   Sun, 23 Nov 2025 08:56:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:57:47 +0000   Sun, 23 Nov 2025 08:56:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:57:47 +0000   Sun, 23 Nov 2025 08:57:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-879861
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                1503fdcb-cc7b-4ade-b29c-e34b53c3598b
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-r5lt5                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-879861                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-f6j8g                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-879861             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-embed-certs-879861    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-bf5ck                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-879861             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 56s   kube-proxy       
	  Normal   Starting                 62s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s   kubelet          Node embed-certs-879861 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s   kubelet          Node embed-certs-879861 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s   kubelet          Node embed-certs-879861 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s   node-controller  Node embed-certs-879861 event: Registered Node embed-certs-879861 in Controller
	  Normal   NodeReady                15s   kubelet          Node embed-certs-879861 status is now: NodeReady
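For reference, the percentages in the Allocated resources table above are computed against the node's Allocatable figures: CPU requests 850m / 2000m ≈ 42.5% and memory requests 220Mi / 8022300Ki (≈ 7834Mi) ≈ 2.8%, which kubectl truncates to the 42% and 2% shown.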
	
	
	==> dmesg <==
	[Nov23 08:34] overlayfs: idmapped layers are currently not supported
	[Nov23 08:35] overlayfs: idmapped layers are currently not supported
	[Nov23 08:36] overlayfs: idmapped layers are currently not supported
	[Nov23 08:37] overlayfs: idmapped layers are currently not supported
	[Nov23 08:38] overlayfs: idmapped layers are currently not supported
	[  +8.276067] overlayfs: idmapped layers are currently not supported
	[Nov23 08:39] overlayfs: idmapped layers are currently not supported
	[ +25.090966] overlayfs: idmapped layers are currently not supported
	[Nov23 08:40] overlayfs: idmapped layers are currently not supported
	[ +26.896711] overlayfs: idmapped layers are currently not supported
	[Nov23 08:41] overlayfs: idmapped layers are currently not supported
	[Nov23 08:43] overlayfs: idmapped layers are currently not supported
	[Nov23 08:45] overlayfs: idmapped layers are currently not supported
	[Nov23 08:46] overlayfs: idmapped layers are currently not supported
	[Nov23 08:47] overlayfs: idmapped layers are currently not supported
	[Nov23 08:49] overlayfs: idmapped layers are currently not supported
	[Nov23 08:51] overlayfs: idmapped layers are currently not supported
	[ +55.116920] overlayfs: idmapped layers are currently not supported
	[Nov23 08:52] overlayfs: idmapped layers are currently not supported
	[  +5.731396] overlayfs: idmapped layers are currently not supported
	[Nov23 08:53] overlayfs: idmapped layers are currently not supported
	[Nov23 08:54] overlayfs: idmapped layers are currently not supported
	[Nov23 08:55] overlayfs: idmapped layers are currently not supported
	[Nov23 08:56] overlayfs: idmapped layers are currently not supported
	[Nov23 08:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [51cdf1426da424f53a4d45487065c1b257a3516a7ddaf608a90078a9c78d8f97] <==
	{"level":"warn","ts":"2025-11-23T08:56:40.865632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:40.888839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:40.905217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:40.946414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:40.958333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:40.984944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:40.993582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:41.019514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:41.034678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:41.057510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:41.077823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:41.101018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:41.122293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:41.152419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:41.175149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:41.204414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:41.240359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:41.282518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:41.289776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:41.317720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:41.364832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:41.395804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:41.428806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:41.452134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:56:41.620282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50018","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:57:48 up  9:40,  0 user,  load average: 3.39, 3.15, 2.67
	Linux embed-certs-879861 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9896f3b8db7b6c74e35ed52f7e662c3f24c539daaeb00aa0b3038cd2b6b64ea8] <==
	I1123 08:56:51.934586       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:56:51.934969       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 08:56:51.935093       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:56:51.935104       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:56:51.935116       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:56:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:56:52.141411       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:56:52.141428       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:56:52.141437       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:56:52.142096       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:57:22.142058       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 08:57:22.142058       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 08:57:22.142132       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 08:57:22.142291       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 08:57:23.741519       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:57:23.741573       1 metrics.go:72] Registering metrics
	I1123 08:57:23.741642       1 controller.go:711] "Syncing nftables rules"
	I1123 08:57:32.143272       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:57:32.143324       1 main.go:301] handling current node
	I1123 08:57:42.141453       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:57:42.141505       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1d1e8d4923c2c01ddb6735210c2590c5ae1f2c7d584097991d6db3fa7cb909b8] <==
	I1123 08:56:43.117421       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:56:43.117426       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:56:43.123517       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:56:43.125917       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:56:43.141153       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:56:43.147404       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:56:43.148439       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:56:43.727979       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:56:43.734075       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:56:43.734099       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:56:44.693303       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:56:44.747042       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:56:44.847136       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:56:44.859162       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 08:56:44.860371       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:56:44.865387       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:56:45.172142       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:56:45.748891       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:56:45.775672       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:56:45.793821       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:56:50.242588       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:56:50.247062       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:56:51.056795       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:56:51.101296       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 08:57:45.653941       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:44678: use of closed network connection
	
	
	==> kube-controller-manager [d3239c3418e5855e6fc9fcce70269ec559929362783d135cece5bfa63df48520] <==
	I1123 08:56:50.151780       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:56:50.155961       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 08:56:50.162223       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 08:56:50.181816       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:56:50.181860       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 08:56:50.182056       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:56:50.182073       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:56:50.182088       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:56:50.182329       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:56:50.182426       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:56:50.182632       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:56:50.182805       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 08:56:50.182853       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:56:50.183168       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-879861"
	I1123 08:56:50.183258       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 08:56:50.183661       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 08:56:50.183697       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 08:56:50.184799       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:56:50.185068       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 08:56:50.187250       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:56:50.187331       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 08:56:50.188375       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 08:56:50.188445       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:56:50.202483       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:57:35.190379       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8036634ede1e63df42d157bea77e2892e77234619d80e009943211e5729b4c37] <==
	I1123 08:56:51.730637       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:56:51.796828       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:56:51.897138       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:56:51.897174       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 08:56:51.897299       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:56:51.924507       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:56:51.924558       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:56:51.942885       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:56:51.943255       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:56:51.943273       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:56:51.945031       1 config.go:200] "Starting service config controller"
	I1123 08:56:51.945040       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:56:51.945059       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:56:51.945063       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:56:51.945074       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:56:51.945078       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:56:51.945679       1 config.go:309] "Starting node config controller"
	I1123 08:56:51.945686       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:56:51.945693       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:56:52.045919       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:56:52.045991       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:56:52.046027       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2246dea7d708f58598774516baf196f4042547c9a35818c04dc7e6627f3b79a8] <==
	I1123 08:56:44.349061       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:56:44.351907       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:56:44.352341       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:56:44.355300       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:56:44.352363       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1123 08:56:44.361478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:56:44.364173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 08:56:44.367518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:56:44.367654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:56:44.367744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:56:44.367858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:56:44.367968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:56:44.368893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:56:44.369204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:56:44.369319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:56:44.369585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:56:44.369735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:56:44.369828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:56:44.369931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:56:44.370038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:56:44.370144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:56:44.370241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:56:44.370335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:56:44.370445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1123 08:56:45.955632       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:56:50 embed-certs-879861 kubelet[1314]: I1123 08:56:50.210632    1314 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:56:51 embed-certs-879861 kubelet[1314]: I1123 08:56:51.179882    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/37c2f985-65de-4d46-955d-3767fe0f32a2-kube-proxy\") pod \"kube-proxy-bf5ck\" (UID: \"37c2f985-65de-4d46-955d-3767fe0f32a2\") " pod="kube-system/kube-proxy-bf5ck"
	Nov 23 08:56:51 embed-certs-879861 kubelet[1314]: I1123 08:56:51.179943    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37c2f985-65de-4d46-955d-3767fe0f32a2-lib-modules\") pod \"kube-proxy-bf5ck\" (UID: \"37c2f985-65de-4d46-955d-3767fe0f32a2\") " pod="kube-system/kube-proxy-bf5ck"
	Nov 23 08:56:51 embed-certs-879861 kubelet[1314]: I1123 08:56:51.179976    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37c2f985-65de-4d46-955d-3767fe0f32a2-xtables-lock\") pod \"kube-proxy-bf5ck\" (UID: \"37c2f985-65de-4d46-955d-3767fe0f32a2\") " pod="kube-system/kube-proxy-bf5ck"
	Nov 23 08:56:51 embed-certs-879861 kubelet[1314]: I1123 08:56:51.179996    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlf8r\" (UniqueName: \"kubernetes.io/projected/37c2f985-65de-4d46-955d-3767fe0f32a2-kube-api-access-mlf8r\") pod \"kube-proxy-bf5ck\" (UID: \"37c2f985-65de-4d46-955d-3767fe0f32a2\") " pod="kube-system/kube-proxy-bf5ck"
	Nov 23 08:56:51 embed-certs-879861 kubelet[1314]: I1123 08:56:51.280771    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/973f09b1-28dd-40ea-9180-85020f65a04e-xtables-lock\") pod \"kindnet-f6j8g\" (UID: \"973f09b1-28dd-40ea-9180-85020f65a04e\") " pod="kube-system/kindnet-f6j8g"
	Nov 23 08:56:51 embed-certs-879861 kubelet[1314]: I1123 08:56:51.281329    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/973f09b1-28dd-40ea-9180-85020f65a04e-cni-cfg\") pod \"kindnet-f6j8g\" (UID: \"973f09b1-28dd-40ea-9180-85020f65a04e\") " pod="kube-system/kindnet-f6j8g"
	Nov 23 08:56:51 embed-certs-879861 kubelet[1314]: I1123 08:56:51.281426    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/973f09b1-28dd-40ea-9180-85020f65a04e-lib-modules\") pod \"kindnet-f6j8g\" (UID: \"973f09b1-28dd-40ea-9180-85020f65a04e\") " pod="kube-system/kindnet-f6j8g"
	Nov 23 08:56:51 embed-certs-879861 kubelet[1314]: I1123 08:56:51.281518    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdpkp\" (UniqueName: \"kubernetes.io/projected/973f09b1-28dd-40ea-9180-85020f65a04e-kube-api-access-gdpkp\") pod \"kindnet-f6j8g\" (UID: \"973f09b1-28dd-40ea-9180-85020f65a04e\") " pod="kube-system/kindnet-f6j8g"
	Nov 23 08:56:51 embed-certs-879861 kubelet[1314]: I1123 08:56:51.292609    1314 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 08:56:51 embed-certs-879861 kubelet[1314]: W1123 08:56:51.493788    1314 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/crio-fcedcacbf4d6a5402e39f3c1f48fe49464f4985adb96f378639250baf1a64770 WatchSource:0}: Error finding container fcedcacbf4d6a5402e39f3c1f48fe49464f4985adb96f378639250baf1a64770: Status 404 returned error can't find the container with id fcedcacbf4d6a5402e39f3c1f48fe49464f4985adb96f378639250baf1a64770
	Nov 23 08:56:51 embed-certs-879861 kubelet[1314]: W1123 08:56:51.786822    1314 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/crio-65fe8bc25f14c65f43dbea149f94ace8e221698f1a706393104a6f805d2ad085 WatchSource:0}: Error finding container 65fe8bc25f14c65f43dbea149f94ace8e221698f1a706393104a6f805d2ad085: Status 404 returned error can't find the container with id 65fe8bc25f14c65f43dbea149f94ace8e221698f1a706393104a6f805d2ad085
	Nov 23 08:56:52 embed-certs-879861 kubelet[1314]: I1123 08:56:52.880980    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bf5ck" podStartSLOduration=1.88096094 podStartE2EDuration="1.88096094s" podCreationTimestamp="2025-11-23 08:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:56:51.881156278 +0000 UTC m=+6.278927905" watchObservedRunningTime="2025-11-23 08:56:52.88096094 +0000 UTC m=+7.278732550"
	Nov 23 08:56:54 embed-certs-879861 kubelet[1314]: I1123 08:56:54.255042    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-f6j8g" podStartSLOduration=3.255022807 podStartE2EDuration="3.255022807s" podCreationTimestamp="2025-11-23 08:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:56:52.882967932 +0000 UTC m=+7.280739559" watchObservedRunningTime="2025-11-23 08:56:54.255022807 +0000 UTC m=+8.652794442"
	Nov 23 08:57:32 embed-certs-879861 kubelet[1314]: I1123 08:57:32.692368    1314 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:57:32 embed-certs-879861 kubelet[1314]: I1123 08:57:32.787599    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk746\" (UniqueName: \"kubernetes.io/projected/cd4e1daf-5ae4-4ebc-b4a1-464686ee3f89-kube-api-access-jk746\") pod \"storage-provisioner\" (UID: \"cd4e1daf-5ae4-4ebc-b4a1-464686ee3f89\") " pod="kube-system/storage-provisioner"
	Nov 23 08:57:32 embed-certs-879861 kubelet[1314]: I1123 08:57:32.787706    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c470da65-70be-4126-90eb-0434f6668546-config-volume\") pod \"coredns-66bc5c9577-r5lt5\" (UID: \"c470da65-70be-4126-90eb-0434f6668546\") " pod="kube-system/coredns-66bc5c9577-r5lt5"
	Nov 23 08:57:32 embed-certs-879861 kubelet[1314]: I1123 08:57:32.787767    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7rn8\" (UniqueName: \"kubernetes.io/projected/c470da65-70be-4126-90eb-0434f6668546-kube-api-access-v7rn8\") pod \"coredns-66bc5c9577-r5lt5\" (UID: \"c470da65-70be-4126-90eb-0434f6668546\") " pod="kube-system/coredns-66bc5c9577-r5lt5"
	Nov 23 08:57:32 embed-certs-879861 kubelet[1314]: I1123 08:57:32.787791    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cd4e1daf-5ae4-4ebc-b4a1-464686ee3f89-tmp\") pod \"storage-provisioner\" (UID: \"cd4e1daf-5ae4-4ebc-b4a1-464686ee3f89\") " pod="kube-system/storage-provisioner"
	Nov 23 08:57:33 embed-certs-879861 kubelet[1314]: W1123 08:57:33.059829    1314 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/crio-5be4543419e673e21afa496849edec7e2d83011a1b6361ca7a8020e4bef8c5d5 WatchSource:0}: Error finding container 5be4543419e673e21afa496849edec7e2d83011a1b6361ca7a8020e4bef8c5d5: Status 404 returned error can't find the container with id 5be4543419e673e21afa496849edec7e2d83011a1b6361ca7a8020e4bef8c5d5
	Nov 23 08:57:33 embed-certs-879861 kubelet[1314]: W1123 08:57:33.094799    1314 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/crio-efbbe76221e1ccad7efaffc33d986c8cc966424485cbdbeadb49e1c440803cad WatchSource:0}: Error finding container efbbe76221e1ccad7efaffc33d986c8cc966424485cbdbeadb49e1c440803cad: Status 404 returned error can't find the container with id efbbe76221e1ccad7efaffc33d986c8cc966424485cbdbeadb49e1c440803cad
	Nov 23 08:57:34 embed-certs-879861 kubelet[1314]: I1123 08:57:34.034564    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.034545121 podStartE2EDuration="42.034545121s" podCreationTimestamp="2025-11-23 08:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:57:33.96227988 +0000 UTC m=+48.360051524" watchObservedRunningTime="2025-11-23 08:57:34.034545121 +0000 UTC m=+48.432316740"
	Nov 23 08:57:36 embed-certs-879861 kubelet[1314]: I1123 08:57:36.484552    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-r5lt5" podStartSLOduration=45.484531892 podStartE2EDuration="45.484531892s" podCreationTimestamp="2025-11-23 08:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:57:34.035608115 +0000 UTC m=+48.433379726" watchObservedRunningTime="2025-11-23 08:57:36.484531892 +0000 UTC m=+50.882303511"
	Nov 23 08:57:36 embed-certs-879861 kubelet[1314]: I1123 08:57:36.514919    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrclf\" (UniqueName: \"kubernetes.io/projected/58c79ac6-29f0-45fb-951d-e92b37939a41-kube-api-access-wrclf\") pod \"busybox\" (UID: \"58c79ac6-29f0-45fb-951d-e92b37939a41\") " pod="default/busybox"
	Nov 23 08:57:36 embed-certs-879861 kubelet[1314]: W1123 08:57:36.814016    1314 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/crio-886ccc01380af1b29481f032e61caaccffb9bc4b9ec5bb8e85dea3b651571f1b WatchSource:0}: Error finding container 886ccc01380af1b29481f032e61caaccffb9bc4b9ec5bb8e85dea3b651571f1b: Status 404 returned error can't find the container with id 886ccc01380af1b29481f032e61caaccffb9bc4b9ec5bb8e85dea3b651571f1b
	
	
	==> storage-provisioner [84659002e7f3a749d5092dfac5e1c94af87c69bcba75bed7915e5631e23cf36f] <==
	I1123 08:57:33.167456       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:57:33.219864       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:57:33.219998       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:57:33.224477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:33.234222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:57:33.234429       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:57:33.234607       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-879861_ece3aeb7-6e36-4afd-8b50-476401d96245!
	I1123 08:57:33.236669       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"862d7238-c68b-409a-ac2b-154a7a322a6b", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-879861_ece3aeb7-6e36-4afd-8b50-476401d96245 became leader
	W1123 08:57:33.237715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:33.259441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:57:33.336893       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-879861_ece3aeb7-6e36-4afd-8b50-476401d96245!
	W1123 08:57:35.262471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:35.267480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:37.270895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:37.277305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:39.279834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:39.286480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:41.289358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:41.294435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:43.297875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:43.304181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:45.307810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:45.323766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:47.333090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:57:47.359724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-879861 -n embed-certs-879861
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-879861 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.26s)
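The storage-provisioner log above is otherwise healthy; the repeated warnings appear because the provisioner still drives its leader election through the v1 Endpoints object kube-system/k8s.io-minikube-hostpath, which the API server flags as deprecated in favour of discovery.k8s.io/v1 EndpointSlice. A purely illustrative way to look at the objects behind those warnings, assuming the embed-certs-879861 context from this run is still reachable:

	# Endpoints object used as the provisioner's leader-election lock (the source of the warnings)
	kubectl --context embed-certs-879861 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# Replacement API that the warning points at
	kubectl --context embed-certs-879861 -n kube-system get endpointslices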

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-262764 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-262764 --alsologtostderr -v=1: exit status 80 (1.835221603s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-262764 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:58:27.517377 1239072 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:58:27.517633 1239072 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:58:27.517647 1239072 out.go:374] Setting ErrFile to fd 2...
	I1123 08:58:27.517653 1239072 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:58:27.517979 1239072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:58:27.518277 1239072 out.go:368] Setting JSON to false
	I1123 08:58:27.518307 1239072 mustload.go:66] Loading cluster: default-k8s-diff-port-262764
	I1123 08:58:27.518873 1239072 config.go:182] Loaded profile config "default-k8s-diff-port-262764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:27.519516 1239072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-262764 --format={{.State.Status}}
	I1123 08:58:27.538405 1239072 host.go:66] Checking if "default-k8s-diff-port-262764" exists ...
	I1123 08:58:27.539134 1239072 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:58:27.612059 1239072 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-23 08:58:27.602075684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:58:27.612678 1239072 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-262764 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 08:58:27.616088 1239072 out.go:179] * Pausing node default-k8s-diff-port-262764 ... 
	I1123 08:58:27.619107 1239072 host.go:66] Checking if "default-k8s-diff-port-262764" exists ...
	I1123 08:58:27.619572 1239072 ssh_runner.go:195] Run: systemctl --version
	I1123 08:58:27.619630 1239072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-262764
	I1123 08:58:27.637515 1239072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34532 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/default-k8s-diff-port-262764/id_rsa Username:docker}
	I1123 08:58:27.747090 1239072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:58:27.768620 1239072 pause.go:52] kubelet running: true
	I1123 08:58:27.768689 1239072 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:58:28.118345 1239072 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:58:28.118421 1239072 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:58:28.194019 1239072 cri.go:89] found id: "e721f02a88931cc3b946e7a2e214cebe713103c21c3212acd6d50e28153ad017"
	I1123 08:58:28.194039 1239072 cri.go:89] found id: "60cde3c4410b8d5c0f52861bdff9ef2cbfc4e321255b604d3b58a908126f5ad5"
	I1123 08:58:28.194045 1239072 cri.go:89] found id: "2ced1aed02ad0aec279a98d53d3a1bae737d38f73e10188df9c03f82b985a38f"
	I1123 08:58:28.194048 1239072 cri.go:89] found id: "07c4bdb8689c57d9887abb8863977f22eb98f12f2443cd2e95a9a97f5068a9cb"
	I1123 08:58:28.194052 1239072 cri.go:89] found id: "4566a35049addd0b5ec2596842648d3a7c893e58c6ca48d9da4742ea7108e0c6"
	I1123 08:58:28.194056 1239072 cri.go:89] found id: "844d5c6d2fdc6889c36b9911a4a6534e4317818c129eb910010b6c0ffb4f03f7"
	I1123 08:58:28.194059 1239072 cri.go:89] found id: "9183b8d5f0167d65acae545428bcefaad15989e0187470c12fabe000b501d7b6"
	I1123 08:58:28.194062 1239072 cri.go:89] found id: "3c79c59cf7838dc0d18f5c3de6bc6a24338c907a9104ae14d156735b130d2671"
	I1123 08:58:28.194066 1239072 cri.go:89] found id: "69a0aeac491393aeac0ffcc4bc7ed28f76ff736f9b82dde46869747ff492411b"
	I1123 08:58:28.194074 1239072 cri.go:89] found id: "51840ae2191430c19acbea32b7a4ed57fb678cd00bc67fb057f6a3ac7a3f536d"
	I1123 08:58:28.194080 1239072 cri.go:89] found id: "fff43317f3f83aa6e5f347825226e4b1c677289710286c8a61446e42ac8bfdf1"
	I1123 08:58:28.194083 1239072 cri.go:89] found id: ""
	I1123 08:58:28.194131 1239072 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:58:28.214327 1239072 retry.go:31] will retry after 167.83407ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:58:28Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:58:28.382740 1239072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:58:28.396228 1239072 pause.go:52] kubelet running: false
	I1123 08:58:28.396299 1239072 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:58:28.582977 1239072 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:58:28.583080 1239072 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:58:28.667630 1239072 cri.go:89] found id: "e721f02a88931cc3b946e7a2e214cebe713103c21c3212acd6d50e28153ad017"
	I1123 08:58:28.667649 1239072 cri.go:89] found id: "60cde3c4410b8d5c0f52861bdff9ef2cbfc4e321255b604d3b58a908126f5ad5"
	I1123 08:58:28.667655 1239072 cri.go:89] found id: "2ced1aed02ad0aec279a98d53d3a1bae737d38f73e10188df9c03f82b985a38f"
	I1123 08:58:28.667658 1239072 cri.go:89] found id: "07c4bdb8689c57d9887abb8863977f22eb98f12f2443cd2e95a9a97f5068a9cb"
	I1123 08:58:28.667663 1239072 cri.go:89] found id: "4566a35049addd0b5ec2596842648d3a7c893e58c6ca48d9da4742ea7108e0c6"
	I1123 08:58:28.667666 1239072 cri.go:89] found id: "844d5c6d2fdc6889c36b9911a4a6534e4317818c129eb910010b6c0ffb4f03f7"
	I1123 08:58:28.667669 1239072 cri.go:89] found id: "9183b8d5f0167d65acae545428bcefaad15989e0187470c12fabe000b501d7b6"
	I1123 08:58:28.667672 1239072 cri.go:89] found id: "3c79c59cf7838dc0d18f5c3de6bc6a24338c907a9104ae14d156735b130d2671"
	I1123 08:58:28.667675 1239072 cri.go:89] found id: "69a0aeac491393aeac0ffcc4bc7ed28f76ff736f9b82dde46869747ff492411b"
	I1123 08:58:28.667681 1239072 cri.go:89] found id: "51840ae2191430c19acbea32b7a4ed57fb678cd00bc67fb057f6a3ac7a3f536d"
	I1123 08:58:28.667685 1239072 cri.go:89] found id: "fff43317f3f83aa6e5f347825226e4b1c677289710286c8a61446e42ac8bfdf1"
	I1123 08:58:28.667687 1239072 cri.go:89] found id: ""
	I1123 08:58:28.667734 1239072 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:58:28.682474 1239072 retry.go:31] will retry after 326.84302ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:58:28Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:58:29.010074 1239072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:58:29.023412 1239072 pause.go:52] kubelet running: false
	I1123 08:58:29.023498 1239072 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:58:29.195730 1239072 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:58:29.195811 1239072 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:58:29.264786 1239072 cri.go:89] found id: "e721f02a88931cc3b946e7a2e214cebe713103c21c3212acd6d50e28153ad017"
	I1123 08:58:29.264817 1239072 cri.go:89] found id: "60cde3c4410b8d5c0f52861bdff9ef2cbfc4e321255b604d3b58a908126f5ad5"
	I1123 08:58:29.264822 1239072 cri.go:89] found id: "2ced1aed02ad0aec279a98d53d3a1bae737d38f73e10188df9c03f82b985a38f"
	I1123 08:58:29.264825 1239072 cri.go:89] found id: "07c4bdb8689c57d9887abb8863977f22eb98f12f2443cd2e95a9a97f5068a9cb"
	I1123 08:58:29.264829 1239072 cri.go:89] found id: "4566a35049addd0b5ec2596842648d3a7c893e58c6ca48d9da4742ea7108e0c6"
	I1123 08:58:29.264832 1239072 cri.go:89] found id: "844d5c6d2fdc6889c36b9911a4a6534e4317818c129eb910010b6c0ffb4f03f7"
	I1123 08:58:29.264835 1239072 cri.go:89] found id: "9183b8d5f0167d65acae545428bcefaad15989e0187470c12fabe000b501d7b6"
	I1123 08:58:29.264855 1239072 cri.go:89] found id: "3c79c59cf7838dc0d18f5c3de6bc6a24338c907a9104ae14d156735b130d2671"
	I1123 08:58:29.264859 1239072 cri.go:89] found id: "69a0aeac491393aeac0ffcc4bc7ed28f76ff736f9b82dde46869747ff492411b"
	I1123 08:58:29.264866 1239072 cri.go:89] found id: "51840ae2191430c19acbea32b7a4ed57fb678cd00bc67fb057f6a3ac7a3f536d"
	I1123 08:58:29.264884 1239072 cri.go:89] found id: "fff43317f3f83aa6e5f347825226e4b1c677289710286c8a61446e42ac8bfdf1"
	I1123 08:58:29.264896 1239072 cri.go:89] found id: ""
	I1123 08:58:29.264954 1239072 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:58:29.279035 1239072 out.go:203] 
	W1123 08:58:29.282007 1239072 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:58:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:58:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:58:29.282031 1239072 out.go:285] * 
	* 
	W1123 08:58:29.290835 1239072 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:58:29.293797 1239072 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-262764 --alsologtostderr -v=1 failed: exit status 80
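Every pause attempt in the log above fails the same way: kubelet is stopped, crictl still lists the kube-system containers, but "sudo runc list -f json" exits 1 with "open /run/runc: no such file or directory", so minikube gives up with GUEST_PAUSE after its retries. A minimal manual reproduction of that sequence, assuming the default-k8s-diff-port-262764 profile from this run is still up (the commands mirror the ones shown in the log; nothing below is produced by the test itself):

	# Shell into the node of the failing profile
	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-262764
	# Inside the node, repeat what pause does:
	sudo systemctl is-active kubelet                                              # pause disables this first
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system     # containers are still found via CRI
	ls -ld /run/runc                                                              # runc's default state root; missing on this node
	sudo runc list -f json                                                        # reproduces the exit-1 error from the log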
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-262764
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-262764:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c",
	        "Created": "2025-11-23T08:55:37.40456105Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1234047,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:57:21.13425064Z",
	            "FinishedAt": "2025-11-23T08:57:20.316512428Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c/hostname",
	        "HostsPath": "/var/lib/docker/containers/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c/hosts",
	        "LogPath": "/var/lib/docker/containers/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c-json.log",
	        "Name": "/default-k8s-diff-port-262764",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-262764:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-262764",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c",
	                "LowerDir": "/var/lib/docker/overlay2/f72313a8ebe5346b2a4f86d480258d4f1e2db66dfe4fbd251eebdfdd3ddbaac3-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f72313a8ebe5346b2a4f86d480258d4f1e2db66dfe4fbd251eebdfdd3ddbaac3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f72313a8ebe5346b2a4f86d480258d4f1e2db66dfe4fbd251eebdfdd3ddbaac3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f72313a8ebe5346b2a4f86d480258d4f1e2db66dfe4fbd251eebdfdd3ddbaac3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-262764",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-262764/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-262764",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-262764",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-262764",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d548a9f3fb2498cfdbf69e85fca871660a97f3d160c5a35f8b76417a01f26ef",
	            "SandboxKey": "/var/run/docker/netns/1d548a9f3fb2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34532"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34533"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34536"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34534"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34535"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-262764": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:0d:aa:c2:5f:02",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a88fa92783a732a39910d80f98969c606d7a2bdb381d5a678aa8210ce1334564",
	                    "EndpointID": "af2d1fbaec7a52a8b350af77093ad4356ff9a8bdf411787e4b3a900f77aa1f9d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-262764",
	                        "c3373e1079a6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-262764 -n default-k8s-diff-port-262764
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-262764 -n default-k8s-diff-port-262764: exit status 2 (368.348281ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
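The non-zero status exit here is consistent with the failed pause above: the host container reports Running, but kubelet was already disabled during the pause attempt (pause.go logged "kubelet running: false"), so an overall status check no longer comes back clean. Querying the individual fields makes that visible; an illustrative variant of the same status call:

	out/minikube-linux-arm64 status -p default-k8s-diff-port-262764 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'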
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-262764 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-262764 logs -n 25: (1.367767785s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cert-options-194318 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-194318          │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ delete  │ -p cert-options-194318                                                                                                                                                                                                                        │ cert-options-194318          │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ start   │ -p old-k8s-version-283312 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-283312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │                     │
	│ stop    │ -p old-k8s-version-283312 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-283312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:54 UTC │
	│ start   │ -p old-k8s-version-283312 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:55 UTC │
	│ image   │ old-k8s-version-283312 image list --format=json                                                                                                                                                                                               │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ pause   │ -p old-k8s-version-283312 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ delete  │ -p old-k8s-version-283312                                                                                                                                                                                                                     │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ delete  │ -p old-k8s-version-283312                                                                                                                                                                                                                     │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ start   │ -p default-k8s-diff-port-262764 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p cert-expiration-322507 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-322507       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p cert-expiration-322507                                                                                                                                                                                                                     │ cert-expiration-322507       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-262764 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-262764 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-262764 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ start   │ -p default-k8s-diff-port-262764 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:58 UTC │
	│ addons  │ enable metrics-server -p embed-certs-879861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ stop    │ -p embed-certs-879861 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-879861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │                     │
	│ image   │ default-k8s-diff-port-262764 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ pause   │ -p default-k8s-diff-port-262764 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:58:01
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:58:01.245850 1236855 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:58:01.245979 1236855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:58:01.245990 1236855 out.go:374] Setting ErrFile to fd 2...
	I1123 08:58:01.245996 1236855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:58:01.246338 1236855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:58:01.246812 1236855 out.go:368] Setting JSON to false
	I1123 08:58:01.248147 1236855 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34827,"bootTime":1763853455,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 08:58:01.248258 1236855 start.go:143] virtualization:  
	I1123 08:58:01.251250 1236855 out.go:179] * [embed-certs-879861] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:58:01.255061 1236855 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:58:01.255151 1236855 notify.go:221] Checking for updates...
	I1123 08:58:01.261015 1236855 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:58:01.263957 1236855 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:58:01.266914 1236855 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 08:58:01.269875 1236855 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:58:01.272664 1236855 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:58:01.276272 1236855 config.go:182] Loaded profile config "embed-certs-879861": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:01.276834 1236855 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:58:01.307315 1236855 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:58:01.307474 1236855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:58:01.383876 1236855 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:58:01.368352796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:58:01.384000 1236855 docker.go:319] overlay module found
	I1123 08:58:01.387281 1236855 out.go:179] * Using the docker driver based on existing profile
	I1123 08:58:01.390201 1236855 start.go:309] selected driver: docker
	I1123 08:58:01.390227 1236855 start.go:927] validating driver "docker" against &{Name:embed-certs-879861 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879861 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:58:01.390351 1236855 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:58:01.391211 1236855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:58:01.451681 1236855 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:58:01.440639619 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:58:01.452075 1236855 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:58:01.452107 1236855 cni.go:84] Creating CNI manager for ""
	I1123 08:58:01.452165 1236855 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:58:01.452202 1236855 start.go:353] cluster config:
	{Name:embed-certs-879861 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879861 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:58:01.457176 1236855 out.go:179] * Starting "embed-certs-879861" primary control-plane node in "embed-certs-879861" cluster
	I1123 08:58:01.460056 1236855 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:58:01.463117 1236855 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:58:01.465947 1236855 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:58:01.466005 1236855 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 08:58:01.466033 1236855 cache.go:65] Caching tarball of preloaded images
	I1123 08:58:01.466158 1236855 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:58:01.466180 1236855 preload.go:238] Found /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 08:58:01.466191 1236855 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:58:01.466501 1236855 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/config.json ...
	I1123 08:58:01.488647 1236855 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:58:01.488668 1236855 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:58:01.488688 1236855 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:58:01.488718 1236855 start.go:360] acquireMachinesLock for embed-certs-879861: {Name:mkc426f5135ca68e4cb995276c3947d42bb1e43d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:01.488775 1236855 start.go:364] duration metric: took 34.641µs to acquireMachinesLock for "embed-certs-879861"
	I1123 08:58:01.488798 1236855 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:58:01.488804 1236855 fix.go:54] fixHost starting: 
	I1123 08:58:01.489067 1236855 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Status}}
	I1123 08:58:01.511277 1236855 fix.go:112] recreateIfNeeded on embed-certs-879861: state=Stopped err=<nil>
	W1123 08:58:01.511312 1236855 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 08:58:02.534240 1233920 pod_ready.go:104] pod "coredns-66bc5c9577-mmrrf" is not "Ready", error: <nil>
	W1123 08:58:04.536216 1233920 pod_ready.go:104] pod "coredns-66bc5c9577-mmrrf" is not "Ready", error: <nil>
	I1123 08:58:01.514461 1236855 out.go:252] * Restarting existing docker container for "embed-certs-879861" ...
	I1123 08:58:01.514552 1236855 cli_runner.go:164] Run: docker start embed-certs-879861
	I1123 08:58:01.813326 1236855 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Status}}
	I1123 08:58:01.834941 1236855 kic.go:430] container "embed-certs-879861" state is running.
	I1123 08:58:01.835385 1236855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-879861
	I1123 08:58:01.858469 1236855 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/config.json ...
	I1123 08:58:01.858700 1236855 machine.go:94] provisionDockerMachine start ...
	I1123 08:58:01.858769 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:01.886289 1236855 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:01.886651 1236855 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34537 <nil> <nil>}
	I1123 08:58:01.886661 1236855 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:58:01.887689 1236855 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:58:05.038606 1236855 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-879861
	
	I1123 08:58:05.038626 1236855 ubuntu.go:182] provisioning hostname "embed-certs-879861"
	I1123 08:58:05.038699 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:05.055983 1236855 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:05.056289 1236855 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34537 <nil> <nil>}
	I1123 08:58:05.056306 1236855 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-879861 && echo "embed-certs-879861" | sudo tee /etc/hostname
	I1123 08:58:05.221105 1236855 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-879861
	
	I1123 08:58:05.221245 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:05.239464 1236855 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:05.239771 1236855 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34537 <nil> <nil>}
	I1123 08:58:05.239786 1236855 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-879861' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-879861/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-879861' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:58:05.391232 1236855 main.go:143] libmachine: SSH cmd err, output: <nil>: 
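
The hostname step above runs an idempotent shell snippet over SSH so that /etc/hosts always resolves the node name. The following is a minimal Go sketch of that same edit, not minikube's actual code; hostsPath deliberately points at a throwaway file instead of the real /etc/hosts.

// ensure_hosts.go - sketch of the idempotent /etc/hosts update shown above:
// if no line ends with the node name, rewrite an existing 127.0.1.1 entry
// or append a new one. Assumes a local test file, not the real hosts file.
package main

import (
	"fmt"
	"os"
	"strings"
)

const hostsPath = "/tmp/hosts-example" // assumed path for experimentation

func ensureHostsEntry(name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		t := strings.TrimSpace(l)
		if strings.HasSuffix(t, " "+name) || strings.HasSuffix(t, "\t"+name) {
			return nil // an entry for the name already exists
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+name)
	}
	return os.WriteFile(hostsPath, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("embed-certs-879861"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
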
	I1123 08:58:05.391255 1236855 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 08:58:05.391287 1236855 ubuntu.go:190] setting up certificates
	I1123 08:58:05.391297 1236855 provision.go:84] configureAuth start
	I1123 08:58:05.391359 1236855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-879861
	I1123 08:58:05.408707 1236855 provision.go:143] copyHostCerts
	I1123 08:58:05.408777 1236855 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem, removing ...
	I1123 08:58:05.408795 1236855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem
	I1123 08:58:05.408872 1236855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 08:58:05.408978 1236855 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem, removing ...
	I1123 08:58:05.408991 1236855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem
	I1123 08:58:05.409019 1236855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
	I1123 08:58:05.409077 1236855 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem, removing ...
	I1123 08:58:05.409087 1236855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem
	I1123 08:58:05.409109 1236855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 08:58:05.409160 1236855 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.embed-certs-879861 san=[127.0.0.1 192.168.76.2 embed-certs-879861 localhost minikube]
	I1123 08:58:05.621323 1236855 provision.go:177] copyRemoteCerts
	I1123 08:58:05.621399 1236855 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:58:05.621448 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:05.639150 1236855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:58:05.751078 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:58:05.772215 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 08:58:05.790833 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:58:05.812160 1236855 provision.go:87] duration metric: took 420.838655ms to configureAuth
	I1123 08:58:05.812243 1236855 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:58:05.812464 1236855 config.go:182] Loaded profile config "embed-certs-879861": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:05.812576 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:05.829810 1236855 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:05.830161 1236855 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34537 <nil> <nil>}
	I1123 08:58:05.830176 1236855 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:58:06.221677 1236855 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:58:06.221742 1236855 machine.go:97] duration metric: took 4.363031457s to provisionDockerMachine
	I1123 08:58:06.221769 1236855 start.go:293] postStartSetup for "embed-certs-879861" (driver="docker")
	I1123 08:58:06.221794 1236855 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:58:06.221871 1236855 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:58:06.221926 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:06.242878 1236855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:58:06.351476 1236855 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:58:06.354721 1236855 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:58:06.354747 1236855 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:58:06.354758 1236855 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/addons for local assets ...
	I1123 08:58:06.354813 1236855 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/files for local assets ...
	I1123 08:58:06.354891 1236855 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem -> 10431592.pem in /etc/ssl/certs
	I1123 08:58:06.354987 1236855 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:58:06.362381 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:58:06.388185 1236855 start.go:296] duration metric: took 166.386877ms for postStartSetup
	I1123 08:58:06.388307 1236855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:58:06.388374 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:06.404608 1236855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:58:06.504077 1236855 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:58:06.509104 1236855 fix.go:56] duration metric: took 5.020293186s for fixHost
	I1123 08:58:06.509132 1236855 start.go:83] releasing machines lock for "embed-certs-879861", held for 5.020344302s
	I1123 08:58:06.509220 1236855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-879861
	I1123 08:58:06.526830 1236855 ssh_runner.go:195] Run: cat /version.json
	I1123 08:58:06.526886 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:06.527205 1236855 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:58:06.527279 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:06.546668 1236855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:58:06.556863 1236855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:58:06.654816 1236855 ssh_runner.go:195] Run: systemctl --version
	I1123 08:58:06.762890 1236855 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:58:06.806604 1236855 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:58:06.811088 1236855 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:58:06.811165 1236855 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:58:06.820122 1236855 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:58:06.820191 1236855 start.go:496] detecting cgroup driver to use...
	I1123 08:58:06.820237 1236855 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:58:06.820334 1236855 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:58:06.835904 1236855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:58:06.849126 1236855 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:58:06.849233 1236855 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:58:06.865607 1236855 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:58:06.879312 1236855 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:58:06.999841 1236855 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:58:07.128936 1236855 docker.go:234] disabling docker service ...
	I1123 08:58:07.129058 1236855 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:58:07.146044 1236855 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:58:07.159230 1236855 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:58:07.274713 1236855 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:58:07.413016 1236855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:58:07.426892 1236855 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:58:07.440904 1236855 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:58:07.440984 1236855 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:07.449761 1236855 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 08:58:07.449852 1236855 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:07.458645 1236855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:07.467768 1236855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:07.477440 1236855 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:58:07.485588 1236855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:07.494867 1236855 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:07.504434 1236855 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:07.513107 1236855 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:58:07.521654 1236855 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:58:07.529449 1236855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:58:07.650264 1236855 ssh_runner.go:195] Run: sudo systemctl restart crio
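
The sed commands above pin the pause image and force the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before restarting cri-o. Below is a hedged Go sketch of those two substitutions applied to a local copy of the file; it is illustrative only, and the path is an assumption.

// crio_conf_patch.go - sketch of the two config edits shown above. Running this
// against the real file would require root plus a "systemctl restart crio".
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/tmp/02-crio.conf" // assumed local copy for experimentation
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
}
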
	I1123 08:58:07.843010 1236855 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:58:07.843123 1236855 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:58:07.846968 1236855 start.go:564] Will wait 60s for crictl version
	I1123 08:58:07.847063 1236855 ssh_runner.go:195] Run: which crictl
	I1123 08:58:07.850701 1236855 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:58:07.879793 1236855 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:58:07.879889 1236855 ssh_runner.go:195] Run: crio --version
	I1123 08:58:07.913507 1236855 ssh_runner.go:195] Run: crio --version
	I1123 08:58:07.956938 1236855 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 08:58:07.959814 1236855 cli_runner.go:164] Run: docker network inspect embed-certs-879861 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:58:07.976441 1236855 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 08:58:07.980347 1236855 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:58:07.991673 1236855 kubeadm.go:884] updating cluster {Name:embed-certs-879861 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879861 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:58:07.991815 1236855 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:58:07.991874 1236855 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:58:08.028154 1236855 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:58:08.028179 1236855 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:58:08.028236 1236855 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:58:08.057403 1236855 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:58:08.057425 1236855 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:58:08.057433 1236855 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 08:58:08.057537 1236855 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-879861 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879861 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:58:08.057624 1236855 ssh_runner.go:195] Run: crio config
	I1123 08:58:08.138866 1236855 cni.go:84] Creating CNI manager for ""
	I1123 08:58:08.138890 1236855 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:58:08.138917 1236855 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:58:08.138948 1236855 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-879861 NodeName:embed-certs-879861 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:58:08.139096 1236855 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-879861"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:58:08.139207 1236855 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:58:08.147604 1236855 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:58:08.147670 1236855 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:58:08.155096 1236855 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1123 08:58:08.167769 1236855 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:58:08.180984 1236855 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1123 08:58:08.193661 1236855 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:58:08.197178 1236855 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:58:08.206763 1236855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:58:08.326188 1236855 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:58:08.343659 1236855 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861 for IP: 192.168.76.2
	I1123 08:58:08.343720 1236855 certs.go:195] generating shared ca certs ...
	I1123 08:58:08.343753 1236855 certs.go:227] acquiring lock for ca certs: {Name:mk8b2dd1177c57b74f955f055073d275001ee616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:08.343896 1236855 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key
	I1123 08:58:08.343986 1236855 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key
	I1123 08:58:08.344012 1236855 certs.go:257] generating profile certs ...
	I1123 08:58:08.344120 1236855 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/client.key
	I1123 08:58:08.344216 1236855 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/apiserver.key.a22c785f
	I1123 08:58:08.344285 1236855 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/proxy-client.key
	I1123 08:58:08.344422 1236855 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem (1338 bytes)
	W1123 08:58:08.344484 1236855 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159_empty.pem, impossibly tiny 0 bytes
	I1123 08:58:08.344507 1236855 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:58:08.344580 1236855 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:58:08.344632 1236855 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:58:08.344692 1236855 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem (1675 bytes)
	I1123 08:58:08.344778 1236855 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:58:08.345441 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:58:08.370014 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:58:08.392743 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:58:08.413851 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:58:08.434720 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 08:58:08.456111 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 08:58:08.476348 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:58:08.497193 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:58:08.520486 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem --> /usr/share/ca-certificates/1043159.pem (1338 bytes)
	I1123 08:58:08.553773 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /usr/share/ca-certificates/10431592.pem (1708 bytes)
	I1123 08:58:08.573662 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:58:08.597998 1236855 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:58:08.621056 1236855 ssh_runner.go:195] Run: openssl version
	I1123 08:58:08.634376 1236855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1043159.pem && ln -fs /usr/share/ca-certificates/1043159.pem /etc/ssl/certs/1043159.pem"
	I1123 08:58:08.645138 1236855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1043159.pem
	I1123 08:58:08.649237 1236855 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:03 /usr/share/ca-certificates/1043159.pem
	I1123 08:58:08.649349 1236855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1043159.pem
	I1123 08:58:08.698601 1236855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1043159.pem /etc/ssl/certs/51391683.0"
	I1123 08:58:08.706766 1236855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10431592.pem && ln -fs /usr/share/ca-certificates/10431592.pem /etc/ssl/certs/10431592.pem"
	I1123 08:58:08.714882 1236855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10431592.pem
	I1123 08:58:08.718890 1236855 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:03 /usr/share/ca-certificates/10431592.pem
	I1123 08:58:08.718958 1236855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10431592.pem
	I1123 08:58:08.763722 1236855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10431592.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:58:08.771420 1236855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:58:08.779364 1236855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:58:08.782845 1236855 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:58:08.782905 1236855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:58:08.823865 1236855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:58:08.832290 1236855 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:58:08.836197 1236855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 08:58:08.877335 1236855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 08:58:08.921282 1236855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 08:58:08.969192 1236855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 08:58:09.010067 1236855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 08:58:09.053481 1236855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
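
The sequence of "openssl x509 -checkend 86400" runs above asks whether each control-plane certificate will still be valid in 24 hours. The same check can be done without shelling out, as in this minimal Go sketch; the certificate path is an assumption.

// cert_checkend.go - sketch of an expiry check equivalent to
// "openssl x509 -noout -in cert.crt -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/tmp/apiserver.crt") // assumed path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past 24h, expires:", cert.NotAfter)
}
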
	I1123 08:58:09.094742 1236855 kubeadm.go:401] StartCluster: {Name:embed-certs-879861 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879861 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:58:09.094843 1236855 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:58:09.094908 1236855 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:58:09.130267 1236855 cri.go:89] found id: ""
	I1123 08:58:09.130339 1236855 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:58:09.138625 1236855 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 08:58:09.138650 1236855 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 08:58:09.138698 1236855 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 08:58:09.146650 1236855 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:58:09.147281 1236855 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-879861" does not appear in /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:58:09.147606 1236855 kubeconfig.go:62] /home/jenkins/minikube-integration/21966-1041293/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-879861" cluster setting kubeconfig missing "embed-certs-879861" context setting]
	I1123 08:58:09.148131 1236855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:09.149462 1236855 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 08:58:09.166519 1236855 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 08:58:09.166562 1236855 kubeadm.go:602] duration metric: took 27.906542ms to restartPrimaryControlPlane
	I1123 08:58:09.166595 1236855 kubeadm.go:403] duration metric: took 71.855959ms to StartCluster
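
The restart path above decides whether the control plane needs reconfiguration by diffing the kubeadm config already on the node against the freshly generated one; identical files mean a plain restart is enough. A hedged Go sketch of that decision, shelling out to diff with assumed local paths:

// kubeadm_diff.go - sketch of the "does the cluster need reconfiguration" check.
// diff exits 0 when the files are identical, 1 when they differ.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("diff", "-u", "/tmp/kubeadm.yaml", "/tmp/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("configs identical: no reconfiguration required")
		return
	}
	fmt.Printf("configs differ or diff failed (%v):\n%s", err, out)
}
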
	I1123 08:58:09.166633 1236855 settings.go:142] acquiring lock: {Name:mk23f3092f33e47ced9558cb4bac2b30c55547fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:09.166717 1236855 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:58:09.168176 1236855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:09.168447 1236855 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:58:09.168743 1236855 config.go:182] Loaded profile config "embed-certs-879861": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:09.168949 1236855 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:58:09.169037 1236855 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-879861"
	I1123 08:58:09.169088 1236855 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-879861"
	W1123 08:58:09.169102 1236855 addons.go:248] addon storage-provisioner should already be in state true
	I1123 08:58:09.169119 1236855 addons.go:70] Setting default-storageclass=true in profile "embed-certs-879861"
	I1123 08:58:09.169148 1236855 host.go:66] Checking if "embed-certs-879861" exists ...
	I1123 08:58:09.169087 1236855 addons.go:70] Setting dashboard=true in profile "embed-certs-879861"
	I1123 08:58:09.169217 1236855 addons.go:239] Setting addon dashboard=true in "embed-certs-879861"
	W1123 08:58:09.169245 1236855 addons.go:248] addon dashboard should already be in state true
	I1123 08:58:09.169299 1236855 host.go:66] Checking if "embed-certs-879861" exists ...
	I1123 08:58:09.169755 1236855 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Status}}
	I1123 08:58:09.169926 1236855 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Status}}
	I1123 08:58:09.169152 1236855 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-879861"
	I1123 08:58:09.171828 1236855 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Status}}
	I1123 08:58:09.175062 1236855 out.go:179] * Verifying Kubernetes components...
	I1123 08:58:09.179710 1236855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:58:09.216277 1236855 addons.go:239] Setting addon default-storageclass=true in "embed-certs-879861"
	W1123 08:58:09.216304 1236855 addons.go:248] addon default-storageclass should already be in state true
	I1123 08:58:09.216335 1236855 host.go:66] Checking if "embed-certs-879861" exists ...
	I1123 08:58:09.217066 1236855 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Status}}
	I1123 08:58:09.267040 1236855 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:58:09.267286 1236855 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 08:58:09.276473 1236855 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1123 08:58:07.036839 1233920 pod_ready.go:104] pod "coredns-66bc5c9577-mmrrf" is not "Ready", error: <nil>
	W1123 08:58:09.040384 1233920 pod_ready.go:104] pod "coredns-66bc5c9577-mmrrf" is not "Ready", error: <nil>
	I1123 08:58:09.276544 1236855 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:58:09.276558 1236855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:58:09.276637 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:09.279833 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 08:58:09.279864 1236855 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 08:58:09.279939 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:09.312028 1236855 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:58:09.312050 1236855 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:58:09.312107 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:09.339339 1236855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:58:09.351418 1236855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:58:09.366166 1236855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:58:09.580555 1236855 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:58:09.612558 1236855 node_ready.go:35] waiting up to 6m0s for node "embed-certs-879861" to be "Ready" ...
	I1123 08:58:09.680479 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 08:58:09.680553 1236855 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 08:58:09.686657 1236855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:58:09.692710 1236855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:58:09.748095 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 08:58:09.748177 1236855 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 08:58:09.823331 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 08:58:09.823411 1236855 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 08:58:09.903822 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 08:58:09.903893 1236855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 08:58:09.923916 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 08:58:09.923995 1236855 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 08:58:09.986951 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 08:58:09.987024 1236855 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 08:58:10.016449 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 08:58:10.016530 1236855 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 08:58:10.051552 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 08:58:10.051631 1236855 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 08:58:10.073996 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:58:10.074093 1236855 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 08:58:10.096274 1236855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1123 08:58:11.540595 1233920 pod_ready.go:104] pod "coredns-66bc5c9577-mmrrf" is not "Ready", error: <nil>
	I1123 08:58:13.039861 1233920 pod_ready.go:94] pod "coredns-66bc5c9577-mmrrf" is "Ready"
	I1123 08:58:13.039894 1233920 pod_ready.go:86] duration metric: took 37.510553338s for pod "coredns-66bc5c9577-mmrrf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:13.043255 1233920 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:13.048606 1233920 pod_ready.go:94] pod "etcd-default-k8s-diff-port-262764" is "Ready"
	I1123 08:58:13.048635 1233920 pod_ready.go:86] duration metric: took 5.351629ms for pod "etcd-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:13.051202 1233920 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:13.060208 1233920 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-262764" is "Ready"
	I1123 08:58:13.060239 1233920 pod_ready.go:86] duration metric: took 9.009495ms for pod "kube-apiserver-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:13.063047 1233920 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:13.233326 1233920 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-262764" is "Ready"
	I1123 08:58:13.233368 1233920 pod_ready.go:86] duration metric: took 170.293695ms for pod "kube-controller-manager-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:13.432475 1233920 pod_ready.go:83] waiting for pod "kube-proxy-9thkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:13.832840 1233920 pod_ready.go:94] pod "kube-proxy-9thkr" is "Ready"
	I1123 08:58:13.832872 1233920 pod_ready.go:86] duration metric: took 400.368296ms for pod "kube-proxy-9thkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:14.033626 1233920 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:14.433115 1233920 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-262764" is "Ready"
	I1123 08:58:14.433146 1233920 pod_ready.go:86] duration metric: took 399.488197ms for pod "kube-scheduler-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:14.433159 1233920 pod_ready.go:40] duration metric: took 38.973132264s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:58:14.523412 1233920 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 08:58:14.526699 1233920 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-262764" cluster and "default" namespace by default
	I1123 08:58:13.939474 1236855 node_ready.go:49] node "embed-certs-879861" is "Ready"
	I1123 08:58:13.939502 1236855 node_ready.go:38] duration metric: took 4.326856539s for node "embed-certs-879861" to be "Ready" ...
	I1123 08:58:13.939514 1236855 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:58:13.939571 1236855 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:58:15.830486 1236855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.143757563s)
	I1123 08:58:15.830525 1236855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.137733296s)
	I1123 08:58:15.888901 1236855 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.949307444s)
	I1123 08:58:15.888935 1236855 api_server.go:72] duration metric: took 6.720342132s to wait for apiserver process to appear ...
	I1123 08:58:15.888941 1236855 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:58:15.888959 1236855 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:58:15.889767 1236855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.793401902s)
	I1123 08:58:15.892771 1236855 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-879861 addons enable metrics-server
	
	I1123 08:58:15.896516 1236855 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1123 08:58:15.899415 1236855 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 08:58:15.900542 1236855 api_server.go:141] control plane version: v1.34.1
	I1123 08:58:15.900564 1236855 api_server.go:131] duration metric: took 11.61679ms to wait for apiserver health ...
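
The healthz wait above simply polls the apiserver endpoint until it answers 200. A small Go sketch of such a probe follows; the URL is taken from the log, and TLS verification is skipped here purely for brevity (a real client would trust the cluster CA).

// healthz_poll.go - sketch of polling https://<node>:8443/healthz until healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.76.2:8443/healthz"
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver did not become healthy in time")
}
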
	I1123 08:58:15.900573 1236855 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:58:15.901487 1236855 addons.go:530] duration metric: took 6.732536342s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1123 08:58:15.904191 1236855 system_pods.go:59] 8 kube-system pods found
	I1123 08:58:15.904230 1236855 system_pods.go:61] "coredns-66bc5c9577-r5lt5" [c470da65-70be-4126-90eb-0434f6668546] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:58:15.904240 1236855 system_pods.go:61] "etcd-embed-certs-879861" [bfcc5c7b-69bf-4a5e-a473-ec3b9d4c1a98] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:58:15.904246 1236855 system_pods.go:61] "kindnet-f6j8g" [973f09b1-28dd-40ea-9180-85020f65a04e] Running
	I1123 08:58:15.904253 1236855 system_pods.go:61] "kube-apiserver-embed-certs-879861" [d3d9369f-cc37-484a-a5b9-bbe97c1b1a51] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:58:15.904268 1236855 system_pods.go:61] "kube-controller-manager-embed-certs-879861" [02779370-efc5-438a-a94c-4fc12286c2fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:58:15.904281 1236855 system_pods.go:61] "kube-proxy-bf5ck" [37c2f985-65de-4d46-955d-3767fe0f32a2] Running
	I1123 08:58:15.904288 1236855 system_pods.go:61] "kube-scheduler-embed-certs-879861" [dab432a6-c8f8-4282-b842-bf07ca17e9e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:58:15.904293 1236855 system_pods.go:61] "storage-provisioner" [cd4e1daf-5ae4-4ebc-b4a1-464686ee3f89] Running
	I1123 08:58:15.904302 1236855 system_pods.go:74] duration metric: took 3.72367ms to wait for pod list to return data ...
	I1123 08:58:15.904309 1236855 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:58:15.906630 1236855 default_sa.go:45] found service account: "default"
	I1123 08:58:15.906650 1236855 default_sa.go:55] duration metric: took 2.332841ms for default service account to be created ...
	I1123 08:58:15.906658 1236855 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:58:15.909581 1236855 system_pods.go:86] 8 kube-system pods found
	I1123 08:58:15.909613 1236855 system_pods.go:89] "coredns-66bc5c9577-r5lt5" [c470da65-70be-4126-90eb-0434f6668546] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:58:15.909631 1236855 system_pods.go:89] "etcd-embed-certs-879861" [bfcc5c7b-69bf-4a5e-a473-ec3b9d4c1a98] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:58:15.909638 1236855 system_pods.go:89] "kindnet-f6j8g" [973f09b1-28dd-40ea-9180-85020f65a04e] Running
	I1123 08:58:15.909644 1236855 system_pods.go:89] "kube-apiserver-embed-certs-879861" [d3d9369f-cc37-484a-a5b9-bbe97c1b1a51] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:58:15.909657 1236855 system_pods.go:89] "kube-controller-manager-embed-certs-879861" [02779370-efc5-438a-a94c-4fc12286c2fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:58:15.909667 1236855 system_pods.go:89] "kube-proxy-bf5ck" [37c2f985-65de-4d46-955d-3767fe0f32a2] Running
	I1123 08:58:15.909674 1236855 system_pods.go:89] "kube-scheduler-embed-certs-879861" [dab432a6-c8f8-4282-b842-bf07ca17e9e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:58:15.909678 1236855 system_pods.go:89] "storage-provisioner" [cd4e1daf-5ae4-4ebc-b4a1-464686ee3f89] Running
	I1123 08:58:15.909691 1236855 system_pods.go:126] duration metric: took 3.027837ms to wait for k8s-apps to be running ...
	I1123 08:58:15.909705 1236855 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:58:15.909764 1236855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:58:15.925091 1236855 system_svc.go:56] duration metric: took 15.383003ms WaitForService to wait for kubelet
	I1123 08:58:15.925132 1236855 kubeadm.go:587] duration metric: took 6.756537656s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:58:15.925151 1236855 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:58:15.930190 1236855 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:58:15.930221 1236855 node_conditions.go:123] node cpu capacity is 2
	I1123 08:58:15.930235 1236855 node_conditions.go:105] duration metric: took 5.078471ms to run NodePressure ...
	I1123 08:58:15.930248 1236855 start.go:242] waiting for startup goroutines ...
	I1123 08:58:15.930255 1236855 start.go:247] waiting for cluster config update ...
	I1123 08:58:15.930266 1236855 start.go:256] writing updated cluster config ...
	I1123 08:58:15.930551 1236855 ssh_runner.go:195] Run: rm -f paused
	I1123 08:58:15.942034 1236855 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:58:15.945976 1236855 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r5lt5" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:58:17.978850 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	W1123 08:58:20.450934 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	W1123 08:58:22.452041 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	W1123 08:58:24.452810 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
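The api_server.go entries above record a simple poll: minikube repeatedly requests the apiserver's /healthz endpoint until it answers 200, then moves on to the pod and service-account checks. A minimal Go sketch of that kind of probe, assuming the https://192.168.76.2:8443/healthz URL from the log and an arbitrary 30-second budget (illustration only, not minikube's implementation):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz polls url until it returns HTTP 200 or the deadline passes.
    // Certificate verification is skipped because the test apiserver uses a
    // self-signed certificate; this probe only cares about liveness.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitHealthz("https://192.168.76.2:8443/healthz", 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }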
	
	
	==> CRI-O <==
	Nov 23 08:58:09 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:09.881858272Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9c7a3923-2ac8-4e5d-84e3-5dc993ff8e2b name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:58:09 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:09.883280066Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e1c6c386-c926-41b1-934e-bb7cb66ac785 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:58:09 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:09.884429695Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b/dashboard-metrics-scraper" id=662b2c66-c07b-4977-87e7-40262275927a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:58:09 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:09.884535432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:58:09 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:09.913059819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:58:09 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:09.913892542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:58:09 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:09.952831489Z" level=info msg="Created container 51840ae2191430c19acbea32b7a4ed57fb678cd00bc67fb057f6a3ac7a3f536d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b/dashboard-metrics-scraper" id=662b2c66-c07b-4977-87e7-40262275927a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:58:09 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:09.95425138Z" level=info msg="Starting container: 51840ae2191430c19acbea32b7a4ed57fb678cd00bc67fb057f6a3ac7a3f536d" id=0a672906-750a-4d7a-9a44-bffe7038ed23 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:58:09 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:09.957758039Z" level=info msg="Started container" PID=1647 containerID=51840ae2191430c19acbea32b7a4ed57fb678cd00bc67fb057f6a3ac7a3f536d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b/dashboard-metrics-scraper id=0a672906-750a-4d7a-9a44-bffe7038ed23 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9bec0c7b6fad0e42bb807156ce5a5287e45ec5f8ff702b84cde2cd4d992c8b30
	Nov 23 08:58:09 default-k8s-diff-port-262764 conmon[1645]: conmon 51840ae2191430c19acb <ninfo>: container 1647 exited with status 1
	Nov 23 08:58:10 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:10.157344223Z" level=info msg="Removing container: d91db26d7a423c34ed7194207ac90a0603a61f353e4756ede7904a1575c13478" id=b4d8543e-60a6-4105-bf46-345be40910d5 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:58:10 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:10.171267739Z" level=info msg="Error loading conmon cgroup of container d91db26d7a423c34ed7194207ac90a0603a61f353e4756ede7904a1575c13478: cgroup deleted" id=b4d8543e-60a6-4105-bf46-345be40910d5 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:58:10 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:10.175615064Z" level=info msg="Removed container d91db26d7a423c34ed7194207ac90a0603a61f353e4756ede7904a1575c13478: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b/dashboard-metrics-scraper" id=b4d8543e-60a6-4105-bf46-345be40910d5 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.738515688Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.751750082Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.751788654Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.751813014Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.761805809Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.761980122Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.762074995Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.767510847Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.767661473Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.76774193Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.777993385Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.778035328Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	51840ae219143       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   9bec0c7b6fad0       dashboard-metrics-scraper-6ffb444bf9-vqt6b             kubernetes-dashboard
	e721f02a88931       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   b6730cf8dc51b       storage-provisioner                                    kube-system
	fff43317f3f83       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   abfb5c6a0b6a7       kubernetes-dashboard-855c9754f9-pcsrh                  kubernetes-dashboard
	60cde3c4410b8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   6699b18b20982       coredns-66bc5c9577-mmrrf                               kube-system
	2ced1aed02ad0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   6ab993bcc410b       kube-proxy-9thkr                                       kube-system
	13abf7f01ac25       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   059504158c0e5       busybox                                                default
	07c4bdb8689c5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   97e0cfdd953a4       kindnet-xsm2q                                          kube-system
	4566a35049add       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   b6730cf8dc51b       storage-provisioner                                    kube-system
	844d5c6d2fdc6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   1907c35aff34e       etcd-default-k8s-diff-port-262764                      kube-system
	9183b8d5f0167       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   f9f716368826c       kube-scheduler-default-k8s-diff-port-262764            kube-system
	3c79c59cf7838       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   d6c1cdafb98ee       kube-apiserver-default-k8s-diff-port-262764            kube-system
	69a0aeac49139       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   bf751ca5cdd3d       kube-controller-manager-default-k8s-diff-port-262764   kube-system
	
	
	==> coredns [60cde3c4410b8d5c0f52861bdff9ef2cbfc4e321255b604d3b58a908126f5ad5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59925 - 34987 "HINFO IN 99384716890802852.7991162138811048242. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.013903013s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
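The reflector failures above are plain TCP timeouts from the CoreDNS pod to the kubernetes Service VIP (10.96.0.1:443) while the control plane was restarting. A quick way to reproduce the same symptom from inside the pod network is a dial with a short deadline; this is an illustrative sketch, not part of the test suite:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// 10.96.0.1:443 is the kubernetes Service VIP seen in the CoreDNS errors.
    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
    	if err != nil {
    		fmt.Println("dial failed:", err) // corresponds to the "i/o timeout" above
    		return
    	}
    	conn.Close()
    	fmt.Println("service VIP reachable")
    }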
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-262764
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-262764
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=default-k8s-diff-port-262764
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_56_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:56:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-262764
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:58:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:58:04 +0000   Sun, 23 Nov 2025 08:55:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:58:04 +0000   Sun, 23 Nov 2025 08:55:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:58:04 +0000   Sun, 23 Nov 2025 08:55:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:58:04 +0000   Sun, 23 Nov 2025 08:56:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-262764
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                9167756b-ee2d-4d27-ae18-a988612654cb
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-mmrrf                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 etcd-default-k8s-diff-port-262764                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m24s
	  kube-system                 kindnet-xsm2q                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-262764             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-262764    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-9thkr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-default-k8s-diff-port-262764             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vqt6b              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-pcsrh                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m16s                  kube-proxy       
	  Normal   Starting                 55s                    kube-proxy       
	  Warning  CgroupV1                 2m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m23s                  kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m23s                  kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m23s                  kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m19s                  node-controller  Node default-k8s-diff-port-262764 event: Registered Node default-k8s-diff-port-262764 in Controller
	  Normal   NodeReady                97s                    kubelet          Node default-k8s-diff-port-262764 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                    node-controller  Node default-k8s-diff-port-262764 event: Registered Node default-k8s-diff-port-262764 in Controller
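For reference, the Allocated resources block above follows directly from the pod table: CPU requests are 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, i.e. 850m of the node's 2000m capacity ≈ 42%, and the only CPU limit is kindnet's 100m (5%). Memory requests are 70Mi + 100Mi + 50Mi = 220Mi and memory limits 170Mi + 50Mi = 220Mi, each roughly 2% of the 8022300Ki allocatable memory.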
	
	
	==> dmesg <==
	[Nov23 08:35] overlayfs: idmapped layers are currently not supported
	[Nov23 08:36] overlayfs: idmapped layers are currently not supported
	[Nov23 08:37] overlayfs: idmapped layers are currently not supported
	[Nov23 08:38] overlayfs: idmapped layers are currently not supported
	[  +8.276067] overlayfs: idmapped layers are currently not supported
	[Nov23 08:39] overlayfs: idmapped layers are currently not supported
	[ +25.090966] overlayfs: idmapped layers are currently not supported
	[Nov23 08:40] overlayfs: idmapped layers are currently not supported
	[ +26.896711] overlayfs: idmapped layers are currently not supported
	[Nov23 08:41] overlayfs: idmapped layers are currently not supported
	[Nov23 08:43] overlayfs: idmapped layers are currently not supported
	[Nov23 08:45] overlayfs: idmapped layers are currently not supported
	[Nov23 08:46] overlayfs: idmapped layers are currently not supported
	[Nov23 08:47] overlayfs: idmapped layers are currently not supported
	[Nov23 08:49] overlayfs: idmapped layers are currently not supported
	[Nov23 08:51] overlayfs: idmapped layers are currently not supported
	[ +55.116920] overlayfs: idmapped layers are currently not supported
	[Nov23 08:52] overlayfs: idmapped layers are currently not supported
	[  +5.731396] overlayfs: idmapped layers are currently not supported
	[Nov23 08:53] overlayfs: idmapped layers are currently not supported
	[Nov23 08:54] overlayfs: idmapped layers are currently not supported
	[Nov23 08:55] overlayfs: idmapped layers are currently not supported
	[Nov23 08:56] overlayfs: idmapped layers are currently not supported
	[Nov23 08:57] overlayfs: idmapped layers are currently not supported
	[Nov23 08:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [844d5c6d2fdc6889c36b9911a4a6534e4317818c129eb910010b6c0ffb4f03f7] <==
	{"level":"warn","ts":"2025-11-23T08:57:31.672851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:31.709134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:31.750900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:31.799964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:31.851857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:31.885826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:31.923504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:31.952000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:31.964813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:31.984649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.012058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.024051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.056922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.075701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.102803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.130030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.146048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.174115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.227042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.255709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.285644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.321341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.345235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.361620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.451864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60942","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:58:30 up  9:40,  0 user,  load average: 4.07, 3.30, 2.74
	Linux default-k8s-diff-port-262764 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [07c4bdb8689c57d9887abb8863977f22eb98f12f2443cd2e95a9a97f5068a9cb] <==
	I1123 08:57:34.540510       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:57:34.540683       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:57:34.540800       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:57:34.540812       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:57:34.540824       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:57:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:57:34.747433       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:57:34.747460       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:57:34.747470       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:57:34.747587       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:58:04.747806       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 08:58:04.747806       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 08:58:04.747926       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 08:58:04.748061       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1123 08:58:06.348323       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:58:06.348367       1 metrics.go:72] Registering metrics
	I1123 08:58:06.348421       1 controller.go:711] "Syncing nftables rules"
	I1123 08:58:14.738213       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:58:14.738265       1 main.go:301] handling current node
	I1123 08:58:24.743255       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:58:24.743292       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3c79c59cf7838dc0d18f5c3de6bc6a24338c907a9104ae14d156735b130d2671] <==
	I1123 08:57:33.778774       1 aggregator.go:171] initial CRD sync complete...
	I1123 08:57:33.778786       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 08:57:33.778792       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:57:33.778799       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:57:33.778969       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 08:57:33.802175       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 08:57:33.802742       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 08:57:33.844807       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 08:57:33.862726       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 08:57:33.862818       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 08:57:33.862825       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 08:57:33.869707       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 08:57:33.884605       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1123 08:57:33.892898       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 08:57:34.059494       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:57:34.478233       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:57:34.925040       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:57:34.990884       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:57:35.036865       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:57:35.051896       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:57:35.289314       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.91.28"}
	I1123 08:57:35.306506       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.197.117"}
	I1123 08:57:37.439928       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:57:37.489812       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:57:37.601041       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [69a0aeac491393aeac0ffcc4bc7ed28f76ff736f9b82dde46869747ff492411b] <==
	I1123 08:57:37.005317       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:57:37.006875       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:57:37.011091       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:57:37.015791       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:57:37.016509       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 08:57:37.024703       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:57:37.030017       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:57:37.030129       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:57:37.030143       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 08:57:37.030373       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 08:57:37.030442       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 08:57:37.030475       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 08:57:37.030504       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:57:37.030579       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:57:37.030669       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:57:37.030760       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-262764"
	I1123 08:57:37.030833       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 08:57:37.031421       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:57:37.033883       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:57:37.033960       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 08:57:37.036049       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:57:37.036119       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:57:37.039069       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:57:37.044196       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:57:37.050660       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [2ced1aed02ad0aec279a98d53d3a1bae737d38f73e10188df9c03f82b985a38f] <==
	I1123 08:57:34.996032       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:57:35.269412       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:57:35.369677       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:57:35.369713       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 08:57:35.369797       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:57:35.422946       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:57:35.423057       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:57:35.430275       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:57:35.430611       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:57:35.431377       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:57:35.432944       1 config.go:200] "Starting service config controller"
	I1123 08:57:35.432997       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:57:35.433042       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:57:35.433069       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:57:35.433776       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:57:35.433817       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:57:35.434465       1 config.go:309] "Starting node config controller"
	I1123 08:57:35.435037       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:57:35.435085       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:57:35.533123       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:57:35.534370       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:57:35.534373       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [9183b8d5f0167d65acae545428bcefaad15989e0187470c12fabe000b501d7b6] <==
	I1123 08:57:31.637059       1 serving.go:386] Generated self-signed cert in-memory
	I1123 08:57:34.388581       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:57:34.388608       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:57:34.398667       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:57:34.398746       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 08:57:34.398762       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 08:57:34.398787       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:57:34.407153       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:57:34.407168       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:57:34.421189       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:57:34.421213       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:57:34.499884       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 08:57:34.507738       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:57:34.523332       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:57:34 default-k8s-diff-port-262764 kubelet[784]: W1123 08:57:34.333440     784 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c/crio-059504158c0e54c18c6348a98e608dce0d0368ef955e6b0b418e3c5e9b722c0f WatchSource:0}: Error finding container 059504158c0e54c18c6348a98e608dce0d0368ef955e6b0b418e3c5e9b722c0f: Status 404 returned error can't find the container with id 059504158c0e54c18c6348a98e608dce0d0368ef955e6b0b418e3c5e9b722c0f
	Nov 23 08:57:34 default-k8s-diff-port-262764 kubelet[784]: W1123 08:57:34.497810     784 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c/crio-6ab993bcc410bbe38ac668d3f3dee3c9518e0a75064a8e0ef740359927ac40e9 WatchSource:0}: Error finding container 6ab993bcc410bbe38ac668d3f3dee3c9518e0a75064a8e0ef740359927ac40e9: Status 404 returned error can't find the container with id 6ab993bcc410bbe38ac668d3f3dee3c9518e0a75064a8e0ef740359927ac40e9
	Nov 23 08:57:37 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:37.680030     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/30d0a90d-21de-40ab-802a-ef4067be718b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-pcsrh\" (UID: \"30d0a90d-21de-40ab-802a-ef4067be718b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pcsrh"
	Nov 23 08:57:37 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:37.680085     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txvlr\" (UniqueName: \"kubernetes.io/projected/f7311d48-111b-4b4a-adce-2e7dab6310d3-kube-api-access-txvlr\") pod \"dashboard-metrics-scraper-6ffb444bf9-vqt6b\" (UID: \"f7311d48-111b-4b4a-adce-2e7dab6310d3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b"
	Nov 23 08:57:37 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:37.680117     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f7311d48-111b-4b4a-adce-2e7dab6310d3-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vqt6b\" (UID: \"f7311d48-111b-4b4a-adce-2e7dab6310d3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b"
	Nov 23 08:57:37 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:37.680143     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb5hg\" (UniqueName: \"kubernetes.io/projected/30d0a90d-21de-40ab-802a-ef4067be718b-kube-api-access-vb5hg\") pod \"kubernetes-dashboard-855c9754f9-pcsrh\" (UID: \"30d0a90d-21de-40ab-802a-ef4067be718b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pcsrh"
	Nov 23 08:57:37 default-k8s-diff-port-262764 kubelet[784]: W1123 08:57:37.957062     784 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c/crio-9bec0c7b6fad0e42bb807156ce5a5287e45ec5f8ff702b84cde2cd4d992c8b30 WatchSource:0}: Error finding container 9bec0c7b6fad0e42bb807156ce5a5287e45ec5f8ff702b84cde2cd4d992c8b30: Status 404 returned error can't find the container with id 9bec0c7b6fad0e42bb807156ce5a5287e45ec5f8ff702b84cde2cd4d992c8b30
	Nov 23 08:57:42 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:42.695315     784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 08:57:49 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:49.088729     784 scope.go:117] "RemoveContainer" containerID="b523f231039c5ec4314304cd5bafa1975ba4b318b1e289736564c3c8cde28e3d"
	Nov 23 08:57:49 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:49.139245     784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pcsrh" podStartSLOduration=7.312493387 podStartE2EDuration="12.131715423s" podCreationTimestamp="2025-11-23 08:57:37 +0000 UTC" firstStartedPulling="2025-11-23 08:57:37.941323684 +0000 UTC m=+10.217884598" lastFinishedPulling="2025-11-23 08:57:42.76054572 +0000 UTC m=+15.037106634" observedRunningTime="2025-11-23 08:57:43.07638325 +0000 UTC m=+15.352944164" watchObservedRunningTime="2025-11-23 08:57:49.131715423 +0000 UTC m=+21.408276337"
	Nov 23 08:57:50 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:50.093964     784 scope.go:117] "RemoveContainer" containerID="b523f231039c5ec4314304cd5bafa1975ba4b318b1e289736564c3c8cde28e3d"
	Nov 23 08:57:50 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:50.094347     784 scope.go:117] "RemoveContainer" containerID="d91db26d7a423c34ed7194207ac90a0603a61f353e4756ede7904a1575c13478"
	Nov 23 08:57:50 default-k8s-diff-port-262764 kubelet[784]: E1123 08:57:50.094503     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqt6b_kubernetes-dashboard(f7311d48-111b-4b4a-adce-2e7dab6310d3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b" podUID="f7311d48-111b-4b4a-adce-2e7dab6310d3"
	Nov 23 08:57:57 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:57.900092     784 scope.go:117] "RemoveContainer" containerID="d91db26d7a423c34ed7194207ac90a0603a61f353e4756ede7904a1575c13478"
	Nov 23 08:57:57 default-k8s-diff-port-262764 kubelet[784]: E1123 08:57:57.900717     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqt6b_kubernetes-dashboard(f7311d48-111b-4b4a-adce-2e7dab6310d3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b" podUID="f7311d48-111b-4b4a-adce-2e7dab6310d3"
	Nov 23 08:58:05 default-k8s-diff-port-262764 kubelet[784]: I1123 08:58:05.138241     784 scope.go:117] "RemoveContainer" containerID="4566a35049addd0b5ec2596842648d3a7c893e58c6ca48d9da4742ea7108e0c6"
	Nov 23 08:58:09 default-k8s-diff-port-262764 kubelet[784]: I1123 08:58:09.881228     784 scope.go:117] "RemoveContainer" containerID="d91db26d7a423c34ed7194207ac90a0603a61f353e4756ede7904a1575c13478"
	Nov 23 08:58:10 default-k8s-diff-port-262764 kubelet[784]: I1123 08:58:10.155641     784 scope.go:117] "RemoveContainer" containerID="d91db26d7a423c34ed7194207ac90a0603a61f353e4756ede7904a1575c13478"
	Nov 23 08:58:11 default-k8s-diff-port-262764 kubelet[784]: I1123 08:58:11.160079     784 scope.go:117] "RemoveContainer" containerID="51840ae2191430c19acbea32b7a4ed57fb678cd00bc67fb057f6a3ac7a3f536d"
	Nov 23 08:58:11 default-k8s-diff-port-262764 kubelet[784]: E1123 08:58:11.160245     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqt6b_kubernetes-dashboard(f7311d48-111b-4b4a-adce-2e7dab6310d3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b" podUID="f7311d48-111b-4b4a-adce-2e7dab6310d3"
	Nov 23 08:58:17 default-k8s-diff-port-262764 kubelet[784]: I1123 08:58:17.900027     784 scope.go:117] "RemoveContainer" containerID="51840ae2191430c19acbea32b7a4ed57fb678cd00bc67fb057f6a3ac7a3f536d"
	Nov 23 08:58:17 default-k8s-diff-port-262764 kubelet[784]: E1123 08:58:17.900620     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqt6b_kubernetes-dashboard(f7311d48-111b-4b4a-adce-2e7dab6310d3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b" podUID="f7311d48-111b-4b4a-adce-2e7dab6310d3"
	Nov 23 08:58:28 default-k8s-diff-port-262764 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:58:28 default-k8s-diff-port-262764 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:58:28 default-k8s-diff-port-262764 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
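The back-off values in the kubelet messages above (10s, then 20s for the same dashboard-metrics-scraper container) follow the usual CrashLoopBackOff schedule: the restart delay doubles after each failure and is capped, by default, at five minutes. A toy sketch of that schedule, assuming the 10s base and 5m cap (this illustrates the policy; it is not kubelet source):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay := 10 * time.Second   // initial CrashLoopBackOff delay
    	maxDelay := 5 * time.Minute // kubelet caps the back-off here
    	for attempt := 1; attempt <= 6; attempt++ {
    		fmt.Printf("restart %d: wait %s\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }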
	
	
	==> kubernetes-dashboard [fff43317f3f83aa6e5f347825226e4b1c677289710286c8a61446e42ac8bfdf1] <==
	2025/11/23 08:57:42 Starting overwatch
	2025/11/23 08:57:42 Using namespace: kubernetes-dashboard
	2025/11/23 08:57:42 Using in-cluster config to connect to apiserver
	2025/11/23 08:57:42 Using secret token for csrf signing
	2025/11/23 08:57:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 08:57:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 08:57:42 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 08:57:42 Generating JWE encryption key
	2025/11/23 08:57:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 08:57:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 08:57:43 Initializing JWE encryption key from synchronized object
	2025/11/23 08:57:43 Creating in-cluster Sidecar client
	2025/11/23 08:57:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 08:57:43 Serving insecurely on HTTP port: 9090
	2025/11/23 08:58:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [4566a35049addd0b5ec2596842648d3a7c893e58c6ca48d9da4742ea7108e0c6] <==
	I1123 08:57:34.904934       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 08:58:04.906901       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e721f02a88931cc3b946e7a2e214cebe713103c21c3212acd6d50e28153ad017] <==
	I1123 08:58:05.194099       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:58:05.208840       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:58:05.208989       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:58:05.218361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:08.673901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:12.947815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:16.545824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:19.598822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:22.621357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:22.626820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:58:22.627054       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:58:22.627286       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-262764_7298b6dd-6745-4a59-952c-997c92de40b4!
	I1123 08:58:22.628225       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"664f8c79-8b37-4f2b-932e-885c1705fac8", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-262764_7298b6dd-6745-4a59-952c-997c92de40b4 became leader
	W1123 08:58:22.643933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:22.653275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:58:22.729646       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-262764_7298b6dd-6745-4a59-952c-997c92de40b4!
	W1123 08:58:24.656865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:24.664453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:26.667884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:26.672923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:28.676917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:28.684204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:30.690359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:30.696333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-262764 -n default-k8s-diff-port-262764
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-262764 -n default-k8s-diff-port-262764: exit status 2 (382.008369ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
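Note on the "(may be ok)" above: out/minikube-linux-arm64 status exits non-zero when not every component is in its nominal state, even though the Go-template field queried here still reports the API server as Running, so the harness records the exit code but keeps going. A minimal sketch (not something the harness runs) of pulling the full component view for the same profile with the same binary, using the status command's JSON output instead of a single template field:

	# Hypothetical follow-up query, not part of the test: dump all component states as JSON.
	out/minikube-linux-arm64 status -p default-k8s-diff-port-262764 --output json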
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-262764 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-262764
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-262764:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c",
	        "Created": "2025-11-23T08:55:37.40456105Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1234047,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:57:21.13425064Z",
	            "FinishedAt": "2025-11-23T08:57:20.316512428Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c/hostname",
	        "HostsPath": "/var/lib/docker/containers/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c/hosts",
	        "LogPath": "/var/lib/docker/containers/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c-json.log",
	        "Name": "/default-k8s-diff-port-262764",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-262764:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-262764",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c",
	                "LowerDir": "/var/lib/docker/overlay2/f72313a8ebe5346b2a4f86d480258d4f1e2db66dfe4fbd251eebdfdd3ddbaac3-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f72313a8ebe5346b2a4f86d480258d4f1e2db66dfe4fbd251eebdfdd3ddbaac3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f72313a8ebe5346b2a4f86d480258d4f1e2db66dfe4fbd251eebdfdd3ddbaac3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f72313a8ebe5346b2a4f86d480258d4f1e2db66dfe4fbd251eebdfdd3ddbaac3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-262764",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-262764/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-262764",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-262764",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-262764",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d548a9f3fb2498cfdbf69e85fca871660a97f3d160c5a35f8b76417a01f26ef",
	            "SandboxKey": "/var/run/docker/netns/1d548a9f3fb2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34532"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34533"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34536"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34534"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34535"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-262764": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:0d:aa:c2:5f:02",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a88fa92783a732a39910d80f98969c606d7a2bdb381d5a678aa8210ce1334564",
	                    "EndpointID": "af2d1fbaec7a52a8b350af77093ad4356ff9a8bdf411787e4b3a900f77aa1f9d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-262764",
	                        "c3373e1079a6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
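The inspect dump above shows every published port for the profile container bound to 127.0.0.1, for example the 8444/tcp API-server port mapped to host port 34535. The same Go-template indexing that minikube itself uses later in these logs for 22/tcp can pull a single mapping out without the full JSON; a minimal sketch, assuming the container captured above is still running:

	# Hypothetical one-liner, not part of the harness: print the host port Docker
	# published for the cluster's API-server port (8444/tcp) on this profile.
	docker container inspect default-k8s-diff-port-262764 \
	  --format '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'
	# For the container captured above this prints 34535.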
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-262764 -n default-k8s-diff-port-262764
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-262764 -n default-k8s-diff-port-262764: exit status 2 (345.218385ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-262764 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-262764 logs -n 25: (1.295601678s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cert-options-194318 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-194318          │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ delete  │ -p cert-options-194318                                                                                                                                                                                                                        │ cert-options-194318          │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:52 UTC │
	│ start   │ -p old-k8s-version-283312 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:52 UTC │ 23 Nov 25 08:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-283312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │                     │
	│ stop    │ -p old-k8s-version-283312 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-283312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:54 UTC │
	│ start   │ -p old-k8s-version-283312 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:55 UTC │
	│ image   │ old-k8s-version-283312 image list --format=json                                                                                                                                                                                               │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ pause   │ -p old-k8s-version-283312 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ delete  │ -p old-k8s-version-283312                                                                                                                                                                                                                     │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ delete  │ -p old-k8s-version-283312                                                                                                                                                                                                                     │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ start   │ -p default-k8s-diff-port-262764 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p cert-expiration-322507 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-322507       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p cert-expiration-322507                                                                                                                                                                                                                     │ cert-expiration-322507       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-262764 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-262764 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-262764 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ start   │ -p default-k8s-diff-port-262764 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:58 UTC │
	│ addons  │ enable metrics-server -p embed-certs-879861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ stop    │ -p embed-certs-879861 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-879861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │                     │
	│ image   │ default-k8s-diff-port-262764 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ pause   │ -p default-k8s-diff-port-262764 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:58:01
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:58:01.245850 1236855 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:58:01.245979 1236855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:58:01.245990 1236855 out.go:374] Setting ErrFile to fd 2...
	I1123 08:58:01.245996 1236855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:58:01.246338 1236855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:58:01.246812 1236855 out.go:368] Setting JSON to false
	I1123 08:58:01.248147 1236855 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34827,"bootTime":1763853455,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 08:58:01.248258 1236855 start.go:143] virtualization:  
	I1123 08:58:01.251250 1236855 out.go:179] * [embed-certs-879861] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:58:01.255061 1236855 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:58:01.255151 1236855 notify.go:221] Checking for updates...
	I1123 08:58:01.261015 1236855 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:58:01.263957 1236855 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:58:01.266914 1236855 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 08:58:01.269875 1236855 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:58:01.272664 1236855 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:58:01.276272 1236855 config.go:182] Loaded profile config "embed-certs-879861": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:01.276834 1236855 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:58:01.307315 1236855 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:58:01.307474 1236855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:58:01.383876 1236855 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:58:01.368352796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:58:01.384000 1236855 docker.go:319] overlay module found
	I1123 08:58:01.387281 1236855 out.go:179] * Using the docker driver based on existing profile
	I1123 08:58:01.390201 1236855 start.go:309] selected driver: docker
	I1123 08:58:01.390227 1236855 start.go:927] validating driver "docker" against &{Name:embed-certs-879861 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879861 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:58:01.390351 1236855 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:58:01.391211 1236855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:58:01.451681 1236855 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:58:01.440639619 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:58:01.452075 1236855 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:58:01.452107 1236855 cni.go:84] Creating CNI manager for ""
	I1123 08:58:01.452165 1236855 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:58:01.452202 1236855 start.go:353] cluster config:
	{Name:embed-certs-879861 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879861 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:58:01.457176 1236855 out.go:179] * Starting "embed-certs-879861" primary control-plane node in "embed-certs-879861" cluster
	I1123 08:58:01.460056 1236855 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:58:01.463117 1236855 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:58:01.465947 1236855 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:58:01.466005 1236855 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 08:58:01.466033 1236855 cache.go:65] Caching tarball of preloaded images
	I1123 08:58:01.466158 1236855 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:58:01.466180 1236855 preload.go:238] Found /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 08:58:01.466191 1236855 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:58:01.466501 1236855 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/config.json ...
	I1123 08:58:01.488647 1236855 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:58:01.488668 1236855 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:58:01.488688 1236855 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:58:01.488718 1236855 start.go:360] acquireMachinesLock for embed-certs-879861: {Name:mkc426f5135ca68e4cb995276c3947d42bb1e43d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:01.488775 1236855 start.go:364] duration metric: took 34.641µs to acquireMachinesLock for "embed-certs-879861"
	I1123 08:58:01.488798 1236855 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:58:01.488804 1236855 fix.go:54] fixHost starting: 
	I1123 08:58:01.489067 1236855 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Status}}
	I1123 08:58:01.511277 1236855 fix.go:112] recreateIfNeeded on embed-certs-879861: state=Stopped err=<nil>
	W1123 08:58:01.511312 1236855 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 08:58:02.534240 1233920 pod_ready.go:104] pod "coredns-66bc5c9577-mmrrf" is not "Ready", error: <nil>
	W1123 08:58:04.536216 1233920 pod_ready.go:104] pod "coredns-66bc5c9577-mmrrf" is not "Ready", error: <nil>
	I1123 08:58:01.514461 1236855 out.go:252] * Restarting existing docker container for "embed-certs-879861" ...
	I1123 08:58:01.514552 1236855 cli_runner.go:164] Run: docker start embed-certs-879861
	I1123 08:58:01.813326 1236855 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Status}}
	I1123 08:58:01.834941 1236855 kic.go:430] container "embed-certs-879861" state is running.
	I1123 08:58:01.835385 1236855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-879861
	I1123 08:58:01.858469 1236855 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/config.json ...
	I1123 08:58:01.858700 1236855 machine.go:94] provisionDockerMachine start ...
	I1123 08:58:01.858769 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:01.886289 1236855 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:01.886651 1236855 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34537 <nil> <nil>}
	I1123 08:58:01.886661 1236855 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:58:01.887689 1236855 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:58:05.038606 1236855 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-879861
	
	I1123 08:58:05.038626 1236855 ubuntu.go:182] provisioning hostname "embed-certs-879861"
	I1123 08:58:05.038699 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:05.055983 1236855 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:05.056289 1236855 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34537 <nil> <nil>}
	I1123 08:58:05.056306 1236855 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-879861 && echo "embed-certs-879861" | sudo tee /etc/hostname
	I1123 08:58:05.221105 1236855 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-879861
	
	I1123 08:58:05.221245 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:05.239464 1236855 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:05.239771 1236855 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34537 <nil> <nil>}
	I1123 08:58:05.239786 1236855 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-879861' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-879861/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-879861' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:58:05.391232 1236855 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:58:05.391255 1236855 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 08:58:05.391287 1236855 ubuntu.go:190] setting up certificates
	I1123 08:58:05.391297 1236855 provision.go:84] configureAuth start
	I1123 08:58:05.391359 1236855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-879861
	I1123 08:58:05.408707 1236855 provision.go:143] copyHostCerts
	I1123 08:58:05.408777 1236855 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem, removing ...
	I1123 08:58:05.408795 1236855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem
	I1123 08:58:05.408872 1236855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 08:58:05.408978 1236855 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem, removing ...
	I1123 08:58:05.408991 1236855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem
	I1123 08:58:05.409019 1236855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
	I1123 08:58:05.409077 1236855 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem, removing ...
	I1123 08:58:05.409087 1236855 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem
	I1123 08:58:05.409109 1236855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 08:58:05.409160 1236855 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.embed-certs-879861 san=[127.0.0.1 192.168.76.2 embed-certs-879861 localhost minikube]
	I1123 08:58:05.621323 1236855 provision.go:177] copyRemoteCerts
	I1123 08:58:05.621399 1236855 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:58:05.621448 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:05.639150 1236855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:58:05.751078 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:58:05.772215 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 08:58:05.790833 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:58:05.812160 1236855 provision.go:87] duration metric: took 420.838655ms to configureAuth
	I1123 08:58:05.812243 1236855 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:58:05.812464 1236855 config.go:182] Loaded profile config "embed-certs-879861": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:05.812576 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:05.829810 1236855 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:05.830161 1236855 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34537 <nil> <nil>}
	I1123 08:58:05.830176 1236855 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:58:06.221677 1236855 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:58:06.221742 1236855 machine.go:97] duration metric: took 4.363031457s to provisionDockerMachine
	I1123 08:58:06.221769 1236855 start.go:293] postStartSetup for "embed-certs-879861" (driver="docker")
	I1123 08:58:06.221794 1236855 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:58:06.221871 1236855 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:58:06.221926 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:06.242878 1236855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:58:06.351476 1236855 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:58:06.354721 1236855 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:58:06.354747 1236855 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:58:06.354758 1236855 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/addons for local assets ...
	I1123 08:58:06.354813 1236855 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/files for local assets ...
	I1123 08:58:06.354891 1236855 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem -> 10431592.pem in /etc/ssl/certs
	I1123 08:58:06.354987 1236855 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:58:06.362381 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:58:06.388185 1236855 start.go:296] duration metric: took 166.386877ms for postStartSetup
	I1123 08:58:06.388307 1236855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:58:06.388374 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:06.404608 1236855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:58:06.504077 1236855 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:58:06.509104 1236855 fix.go:56] duration metric: took 5.020293186s for fixHost
	I1123 08:58:06.509132 1236855 start.go:83] releasing machines lock for "embed-certs-879861", held for 5.020344302s
	I1123 08:58:06.509220 1236855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-879861
	I1123 08:58:06.526830 1236855 ssh_runner.go:195] Run: cat /version.json
	I1123 08:58:06.526886 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:06.527205 1236855 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:58:06.527279 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:06.546668 1236855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:58:06.556863 1236855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:58:06.654816 1236855 ssh_runner.go:195] Run: systemctl --version
	I1123 08:58:06.762890 1236855 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:58:06.806604 1236855 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:58:06.811088 1236855 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:58:06.811165 1236855 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:58:06.820122 1236855 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:58:06.820191 1236855 start.go:496] detecting cgroup driver to use...
	I1123 08:58:06.820237 1236855 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:58:06.820334 1236855 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:58:06.835904 1236855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:58:06.849126 1236855 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:58:06.849233 1236855 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:58:06.865607 1236855 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:58:06.879312 1236855 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:58:06.999841 1236855 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:58:07.128936 1236855 docker.go:234] disabling docker service ...
	I1123 08:58:07.129058 1236855 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:58:07.146044 1236855 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:58:07.159230 1236855 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:58:07.274713 1236855 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:58:07.413016 1236855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:58:07.426892 1236855 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:58:07.440904 1236855 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:58:07.440984 1236855 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:07.449761 1236855 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 08:58:07.449852 1236855 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:07.458645 1236855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:07.467768 1236855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:07.477440 1236855 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:58:07.485588 1236855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:07.494867 1236855 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:07.504434 1236855 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:07.513107 1236855 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:58:07.521654 1236855 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:58:07.529449 1236855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:58:07.650264 1236855 ssh_runner.go:195] Run: sudo systemctl restart crio
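Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the settings sketched below before crio is restarted. This is a reconstruction from those commands, not a dump of the actual file, and any other fields in the drop-in are left untouched:

    # Expected effect of the edits on /etc/crio/crio.conf.d/02-crio.conf:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    sudo grep -A 3 -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf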
	I1123 08:58:07.843010 1236855 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:58:07.843123 1236855 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:58:07.846968 1236855 start.go:564] Will wait 60s for crictl version
	I1123 08:58:07.847063 1236855 ssh_runner.go:195] Run: which crictl
	I1123 08:58:07.850701 1236855 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:58:07.879793 1236855 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:58:07.879889 1236855 ssh_runner.go:195] Run: crio --version
	I1123 08:58:07.913507 1236855 ssh_runner.go:195] Run: crio --version
	I1123 08:58:07.956938 1236855 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 08:58:07.959814 1236855 cli_runner.go:164] Run: docker network inspect embed-certs-879861 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:58:07.976441 1236855 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 08:58:07.980347 1236855 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
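The bash one-liner above is an idempotent way to (re)write a single /etc/hosts entry: filter out any existing line for the name, append the desired mapping, and copy the temporary file back into place. A generic sketch of the same pattern, with HOSTNAME and IP as placeholders filled from the log values:

    # Rewrite one /etc/hosts entry without duplicating it
    HOSTNAME=host.minikube.internal
    IP=192.168.76.1
    { grep -v $'\t'"$HOSTNAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOSTNAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts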
	I1123 08:58:07.991673 1236855 kubeadm.go:884] updating cluster {Name:embed-certs-879861 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879861 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:58:07.991815 1236855 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:58:07.991874 1236855 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:58:08.028154 1236855 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:58:08.028179 1236855 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:58:08.028236 1236855 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:58:08.057403 1236855 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:58:08.057425 1236855 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:58:08.057433 1236855 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 08:58:08.057537 1236855 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-879861 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879861 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
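The kubelet drop-in shown above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps later (the "scp memory --> .../10-kubeadm.conf" line below). On the node it can be inspected with standard systemd tooling, for example (a sketch):

    # Show the kubelet unit together with all of its drop-ins
    sudo systemctl cat kubelet
    # Or read the minikube-generated drop-in directly
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf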
	I1123 08:58:08.057624 1236855 ssh_runner.go:195] Run: crio config
	I1123 08:58:08.138866 1236855 cni.go:84] Creating CNI manager for ""
	I1123 08:58:08.138890 1236855 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:58:08.138917 1236855 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:58:08.138948 1236855 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-879861 NodeName:embed-certs-879861 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:58:08.139096 1236855 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-879861"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:58:08.139207 1236855 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:58:08.147604 1236855 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:58:08.147670 1236855 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:58:08.155096 1236855 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1123 08:58:08.167769 1236855 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:58:08.180984 1236855 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1123 08:58:08.193661 1236855 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:58:08.197178 1236855 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:58:08.206763 1236855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:58:08.326188 1236855 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:58:08.343659 1236855 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861 for IP: 192.168.76.2
	I1123 08:58:08.343720 1236855 certs.go:195] generating shared ca certs ...
	I1123 08:58:08.343753 1236855 certs.go:227] acquiring lock for ca certs: {Name:mk8b2dd1177c57b74f955f055073d275001ee616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:08.343896 1236855 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key
	I1123 08:58:08.343986 1236855 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key
	I1123 08:58:08.344012 1236855 certs.go:257] generating profile certs ...
	I1123 08:58:08.344120 1236855 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/client.key
	I1123 08:58:08.344216 1236855 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/apiserver.key.a22c785f
	I1123 08:58:08.344285 1236855 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/proxy-client.key
	I1123 08:58:08.344422 1236855 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem (1338 bytes)
	W1123 08:58:08.344484 1236855 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159_empty.pem, impossibly tiny 0 bytes
	I1123 08:58:08.344507 1236855 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:58:08.344580 1236855 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:58:08.344632 1236855 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:58:08.344692 1236855 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem (1675 bytes)
	I1123 08:58:08.344778 1236855 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:58:08.345441 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:58:08.370014 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:58:08.392743 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:58:08.413851 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:58:08.434720 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 08:58:08.456111 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 08:58:08.476348 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:58:08.497193 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/embed-certs-879861/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:58:08.520486 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem --> /usr/share/ca-certificates/1043159.pem (1338 bytes)
	I1123 08:58:08.553773 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /usr/share/ca-certificates/10431592.pem (1708 bytes)
	I1123 08:58:08.573662 1236855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:58:08.597998 1236855 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:58:08.621056 1236855 ssh_runner.go:195] Run: openssl version
	I1123 08:58:08.634376 1236855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1043159.pem && ln -fs /usr/share/ca-certificates/1043159.pem /etc/ssl/certs/1043159.pem"
	I1123 08:58:08.645138 1236855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1043159.pem
	I1123 08:58:08.649237 1236855 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:03 /usr/share/ca-certificates/1043159.pem
	I1123 08:58:08.649349 1236855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1043159.pem
	I1123 08:58:08.698601 1236855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1043159.pem /etc/ssl/certs/51391683.0"
	I1123 08:58:08.706766 1236855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10431592.pem && ln -fs /usr/share/ca-certificates/10431592.pem /etc/ssl/certs/10431592.pem"
	I1123 08:58:08.714882 1236855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10431592.pem
	I1123 08:58:08.718890 1236855 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:03 /usr/share/ca-certificates/10431592.pem
	I1123 08:58:08.718958 1236855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10431592.pem
	I1123 08:58:08.763722 1236855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10431592.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:58:08.771420 1236855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:58:08.779364 1236855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:58:08.782845 1236855 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:58:08.782905 1236855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:58:08.823865 1236855 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
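The hash-and-symlink sequence above follows OpenSSL's c_rehash convention: each CA certificate is linked into /etc/ssl/certs under the name <subject-hash>.0 so that OpenSSL can locate it by hash lookup. Doing the same for one certificate by hand would look roughly like this (CERT is a placeholder path; the hash value matches the one in the log):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"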
	I1123 08:58:08.832290 1236855 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:58:08.836197 1236855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 08:58:08.877335 1236855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 08:58:08.921282 1236855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 08:58:08.969192 1236855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 08:58:09.010067 1236855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 08:58:09.053481 1236855 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
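Each "-checkend 86400" call above asks OpenSSL whether the certificate will still be valid 24 hours (86400 seconds) from now: the command exits 0 if the cert does not expire within that window and non-zero if it does, and that exit status is what this sequence of checks relies on. A minimal sketch for a single cert:

    # Exit status 0: still valid for at least another 24h; non-zero: expiring or expired
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "certificate is good for at least another 24 hours"
    else
        echo "certificate expires within 24 hours (or is already invalid)"
    fi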
	I1123 08:58:09.094742 1236855 kubeadm.go:401] StartCluster: {Name:embed-certs-879861 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879861 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:58:09.094843 1236855 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:58:09.094908 1236855 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:58:09.130267 1236855 cri.go:89] found id: ""
	I1123 08:58:09.130339 1236855 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:58:09.138625 1236855 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 08:58:09.138650 1236855 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 08:58:09.138698 1236855 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 08:58:09.146650 1236855 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:58:09.147281 1236855 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-879861" does not appear in /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:58:09.147606 1236855 kubeconfig.go:62] /home/jenkins/minikube-integration/21966-1041293/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-879861" cluster setting kubeconfig missing "embed-certs-879861" context setting]
	I1123 08:58:09.148131 1236855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:09.149462 1236855 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 08:58:09.166519 1236855 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 08:58:09.166562 1236855 kubeadm.go:602] duration metric: took 27.906542ms to restartPrimaryControlPlane
	I1123 08:58:09.166595 1236855 kubeadm.go:403] duration metric: took 71.855959ms to StartCluster
	I1123 08:58:09.166633 1236855 settings.go:142] acquiring lock: {Name:mk23f3092f33e47ced9558cb4bac2b30c55547fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:09.166717 1236855 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:58:09.168176 1236855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:09.168447 1236855 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:58:09.168743 1236855 config.go:182] Loaded profile config "embed-certs-879861": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:09.168949 1236855 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:58:09.169037 1236855 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-879861"
	I1123 08:58:09.169088 1236855 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-879861"
	W1123 08:58:09.169102 1236855 addons.go:248] addon storage-provisioner should already be in state true
	I1123 08:58:09.169119 1236855 addons.go:70] Setting default-storageclass=true in profile "embed-certs-879861"
	I1123 08:58:09.169148 1236855 host.go:66] Checking if "embed-certs-879861" exists ...
	I1123 08:58:09.169087 1236855 addons.go:70] Setting dashboard=true in profile "embed-certs-879861"
	I1123 08:58:09.169217 1236855 addons.go:239] Setting addon dashboard=true in "embed-certs-879861"
	W1123 08:58:09.169245 1236855 addons.go:248] addon dashboard should already be in state true
	I1123 08:58:09.169299 1236855 host.go:66] Checking if "embed-certs-879861" exists ...
	I1123 08:58:09.169755 1236855 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Status}}
	I1123 08:58:09.169926 1236855 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Status}}
	I1123 08:58:09.169152 1236855 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-879861"
	I1123 08:58:09.171828 1236855 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Status}}
	I1123 08:58:09.175062 1236855 out.go:179] * Verifying Kubernetes components...
	I1123 08:58:09.179710 1236855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:58:09.216277 1236855 addons.go:239] Setting addon default-storageclass=true in "embed-certs-879861"
	W1123 08:58:09.216304 1236855 addons.go:248] addon default-storageclass should already be in state true
	I1123 08:58:09.216335 1236855 host.go:66] Checking if "embed-certs-879861" exists ...
	I1123 08:58:09.217066 1236855 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Status}}
	I1123 08:58:09.267040 1236855 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:58:09.267286 1236855 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 08:58:09.276473 1236855 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1123 08:58:07.036839 1233920 pod_ready.go:104] pod "coredns-66bc5c9577-mmrrf" is not "Ready", error: <nil>
	W1123 08:58:09.040384 1233920 pod_ready.go:104] pod "coredns-66bc5c9577-mmrrf" is not "Ready", error: <nil>
	I1123 08:58:09.276544 1236855 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:58:09.276558 1236855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:58:09.276637 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:09.279833 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 08:58:09.279864 1236855 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 08:58:09.279939 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:09.312028 1236855 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:58:09.312050 1236855 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:58:09.312107 1236855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:58:09.339339 1236855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:58:09.351418 1236855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:58:09.366166 1236855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:58:09.580555 1236855 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:58:09.612558 1236855 node_ready.go:35] waiting up to 6m0s for node "embed-certs-879861" to be "Ready" ...
	I1123 08:58:09.680479 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 08:58:09.680553 1236855 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 08:58:09.686657 1236855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:58:09.692710 1236855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:58:09.748095 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 08:58:09.748177 1236855 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 08:58:09.823331 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 08:58:09.823411 1236855 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 08:58:09.903822 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 08:58:09.903893 1236855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 08:58:09.923916 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 08:58:09.923995 1236855 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 08:58:09.986951 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 08:58:09.987024 1236855 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 08:58:10.016449 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 08:58:10.016530 1236855 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 08:58:10.051552 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 08:58:10.051631 1236855 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 08:58:10.073996 1236855 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:58:10.074093 1236855 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 08:58:10.096274 1236855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1123 08:58:11.540595 1233920 pod_ready.go:104] pod "coredns-66bc5c9577-mmrrf" is not "Ready", error: <nil>
	I1123 08:58:13.039861 1233920 pod_ready.go:94] pod "coredns-66bc5c9577-mmrrf" is "Ready"
	I1123 08:58:13.039894 1233920 pod_ready.go:86] duration metric: took 37.510553338s for pod "coredns-66bc5c9577-mmrrf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:13.043255 1233920 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:13.048606 1233920 pod_ready.go:94] pod "etcd-default-k8s-diff-port-262764" is "Ready"
	I1123 08:58:13.048635 1233920 pod_ready.go:86] duration metric: took 5.351629ms for pod "etcd-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:13.051202 1233920 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:13.060208 1233920 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-262764" is "Ready"
	I1123 08:58:13.060239 1233920 pod_ready.go:86] duration metric: took 9.009495ms for pod "kube-apiserver-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:13.063047 1233920 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:13.233326 1233920 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-262764" is "Ready"
	I1123 08:58:13.233368 1233920 pod_ready.go:86] duration metric: took 170.293695ms for pod "kube-controller-manager-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:13.432475 1233920 pod_ready.go:83] waiting for pod "kube-proxy-9thkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:13.832840 1233920 pod_ready.go:94] pod "kube-proxy-9thkr" is "Ready"
	I1123 08:58:13.832872 1233920 pod_ready.go:86] duration metric: took 400.368296ms for pod "kube-proxy-9thkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:14.033626 1233920 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:14.433115 1233920 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-262764" is "Ready"
	I1123 08:58:14.433146 1233920 pod_ready.go:86] duration metric: took 399.488197ms for pod "kube-scheduler-default-k8s-diff-port-262764" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:14.433159 1233920 pod_ready.go:40] duration metric: took 38.973132264s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:58:14.523412 1233920 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 08:58:14.526699 1233920 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-262764" cluster and "default" namespace by default
	I1123 08:58:13.939474 1236855 node_ready.go:49] node "embed-certs-879861" is "Ready"
	I1123 08:58:13.939502 1236855 node_ready.go:38] duration metric: took 4.326856539s for node "embed-certs-879861" to be "Ready" ...
	I1123 08:58:13.939514 1236855 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:58:13.939571 1236855 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:58:15.830486 1236855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.143757563s)
	I1123 08:58:15.830525 1236855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.137733296s)
	I1123 08:58:15.888901 1236855 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.949307444s)
	I1123 08:58:15.888935 1236855 api_server.go:72] duration metric: took 6.720342132s to wait for apiserver process to appear ...
	I1123 08:58:15.888941 1236855 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:58:15.888959 1236855 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:58:15.889767 1236855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.793401902s)
	I1123 08:58:15.892771 1236855 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-879861 addons enable metrics-server
	
	I1123 08:58:15.896516 1236855 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1123 08:58:15.899415 1236855 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 08:58:15.900542 1236855 api_server.go:141] control plane version: v1.34.1
	I1123 08:58:15.900564 1236855 api_server.go:131] duration metric: took 11.61679ms to wait for apiserver health ...
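The healthz probe above is a plain HTTPS GET against the apiserver, and the "200 / ok" pair in the log is the entire expected response. Reproducing it by hand looks roughly like this (a sketch: -k skips TLS verification, and it assumes the cluster's default RBAC still permits unauthenticated access to /healthz):

    curl -sk https://192.168.76.2:8443/healthz
    # expected output: ok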
	I1123 08:58:15.900573 1236855 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:58:15.901487 1236855 addons.go:530] duration metric: took 6.732536342s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1123 08:58:15.904191 1236855 system_pods.go:59] 8 kube-system pods found
	I1123 08:58:15.904230 1236855 system_pods.go:61] "coredns-66bc5c9577-r5lt5" [c470da65-70be-4126-90eb-0434f6668546] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:58:15.904240 1236855 system_pods.go:61] "etcd-embed-certs-879861" [bfcc5c7b-69bf-4a5e-a473-ec3b9d4c1a98] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:58:15.904246 1236855 system_pods.go:61] "kindnet-f6j8g" [973f09b1-28dd-40ea-9180-85020f65a04e] Running
	I1123 08:58:15.904253 1236855 system_pods.go:61] "kube-apiserver-embed-certs-879861" [d3d9369f-cc37-484a-a5b9-bbe97c1b1a51] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:58:15.904268 1236855 system_pods.go:61] "kube-controller-manager-embed-certs-879861" [02779370-efc5-438a-a94c-4fc12286c2fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:58:15.904281 1236855 system_pods.go:61] "kube-proxy-bf5ck" [37c2f985-65de-4d46-955d-3767fe0f32a2] Running
	I1123 08:58:15.904288 1236855 system_pods.go:61] "kube-scheduler-embed-certs-879861" [dab432a6-c8f8-4282-b842-bf07ca17e9e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:58:15.904293 1236855 system_pods.go:61] "storage-provisioner" [cd4e1daf-5ae4-4ebc-b4a1-464686ee3f89] Running
	I1123 08:58:15.904302 1236855 system_pods.go:74] duration metric: took 3.72367ms to wait for pod list to return data ...
	I1123 08:58:15.904309 1236855 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:58:15.906630 1236855 default_sa.go:45] found service account: "default"
	I1123 08:58:15.906650 1236855 default_sa.go:55] duration metric: took 2.332841ms for default service account to be created ...
	I1123 08:58:15.906658 1236855 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:58:15.909581 1236855 system_pods.go:86] 8 kube-system pods found
	I1123 08:58:15.909613 1236855 system_pods.go:89] "coredns-66bc5c9577-r5lt5" [c470da65-70be-4126-90eb-0434f6668546] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:58:15.909631 1236855 system_pods.go:89] "etcd-embed-certs-879861" [bfcc5c7b-69bf-4a5e-a473-ec3b9d4c1a98] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:58:15.909638 1236855 system_pods.go:89] "kindnet-f6j8g" [973f09b1-28dd-40ea-9180-85020f65a04e] Running
	I1123 08:58:15.909644 1236855 system_pods.go:89] "kube-apiserver-embed-certs-879861" [d3d9369f-cc37-484a-a5b9-bbe97c1b1a51] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:58:15.909657 1236855 system_pods.go:89] "kube-controller-manager-embed-certs-879861" [02779370-efc5-438a-a94c-4fc12286c2fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:58:15.909667 1236855 system_pods.go:89] "kube-proxy-bf5ck" [37c2f985-65de-4d46-955d-3767fe0f32a2] Running
	I1123 08:58:15.909674 1236855 system_pods.go:89] "kube-scheduler-embed-certs-879861" [dab432a6-c8f8-4282-b842-bf07ca17e9e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:58:15.909678 1236855 system_pods.go:89] "storage-provisioner" [cd4e1daf-5ae4-4ebc-b4a1-464686ee3f89] Running
	I1123 08:58:15.909691 1236855 system_pods.go:126] duration metric: took 3.027837ms to wait for k8s-apps to be running ...
	I1123 08:58:15.909705 1236855 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:58:15.909764 1236855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:58:15.925091 1236855 system_svc.go:56] duration metric: took 15.383003ms WaitForService to wait for kubelet
	I1123 08:58:15.925132 1236855 kubeadm.go:587] duration metric: took 6.756537656s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:58:15.925151 1236855 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:58:15.930190 1236855 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:58:15.930221 1236855 node_conditions.go:123] node cpu capacity is 2
	I1123 08:58:15.930235 1236855 node_conditions.go:105] duration metric: took 5.078471ms to run NodePressure ...
	I1123 08:58:15.930248 1236855 start.go:242] waiting for startup goroutines ...
	I1123 08:58:15.930255 1236855 start.go:247] waiting for cluster config update ...
	I1123 08:58:15.930266 1236855 start.go:256] writing updated cluster config ...
	I1123 08:58:15.930551 1236855 ssh_runner.go:195] Run: rm -f paused
	I1123 08:58:15.942034 1236855 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:58:15.945976 1236855 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r5lt5" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:58:17.978850 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	W1123 08:58:20.450934 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	W1123 08:58:22.452041 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	W1123 08:58:24.452810 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	W1123 08:58:26.952033 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	W1123 08:58:29.480381 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 23 08:58:09 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:09.881858272Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9c7a3923-2ac8-4e5d-84e3-5dc993ff8e2b name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:58:09 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:09.883280066Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e1c6c386-c926-41b1-934e-bb7cb66ac785 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:58:09 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:09.884429695Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b/dashboard-metrics-scraper" id=662b2c66-c07b-4977-87e7-40262275927a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:58:09 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:09.884535432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:58:09 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:09.913059819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:58:09 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:09.913892542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:58:09 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:09.952831489Z" level=info msg="Created container 51840ae2191430c19acbea32b7a4ed57fb678cd00bc67fb057f6a3ac7a3f536d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b/dashboard-metrics-scraper" id=662b2c66-c07b-4977-87e7-40262275927a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:58:09 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:09.95425138Z" level=info msg="Starting container: 51840ae2191430c19acbea32b7a4ed57fb678cd00bc67fb057f6a3ac7a3f536d" id=0a672906-750a-4d7a-9a44-bffe7038ed23 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:58:09 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:09.957758039Z" level=info msg="Started container" PID=1647 containerID=51840ae2191430c19acbea32b7a4ed57fb678cd00bc67fb057f6a3ac7a3f536d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b/dashboard-metrics-scraper id=0a672906-750a-4d7a-9a44-bffe7038ed23 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9bec0c7b6fad0e42bb807156ce5a5287e45ec5f8ff702b84cde2cd4d992c8b30
	Nov 23 08:58:09 default-k8s-diff-port-262764 conmon[1645]: conmon 51840ae2191430c19acb <ninfo>: container 1647 exited with status 1
	Nov 23 08:58:10 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:10.157344223Z" level=info msg="Removing container: d91db26d7a423c34ed7194207ac90a0603a61f353e4756ede7904a1575c13478" id=b4d8543e-60a6-4105-bf46-345be40910d5 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:58:10 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:10.171267739Z" level=info msg="Error loading conmon cgroup of container d91db26d7a423c34ed7194207ac90a0603a61f353e4756ede7904a1575c13478: cgroup deleted" id=b4d8543e-60a6-4105-bf46-345be40910d5 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:58:10 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:10.175615064Z" level=info msg="Removed container d91db26d7a423c34ed7194207ac90a0603a61f353e4756ede7904a1575c13478: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b/dashboard-metrics-scraper" id=b4d8543e-60a6-4105-bf46-345be40910d5 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.738515688Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.751750082Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.751788654Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.751813014Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.761805809Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.761980122Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.762074995Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.767510847Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.767661473Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.76774193Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.777993385Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:58:14 default-k8s-diff-port-262764 crio[655]: time="2025-11-23T08:58:14.778035328Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	51840ae219143       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago       Exited              dashboard-metrics-scraper   2                   9bec0c7b6fad0       dashboard-metrics-scraper-6ffb444bf9-vqt6b             kubernetes-dashboard
	e721f02a88931       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   b6730cf8dc51b       storage-provisioner                                    kube-system
	fff43317f3f83       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   49 seconds ago       Running             kubernetes-dashboard        0                   abfb5c6a0b6a7       kubernetes-dashboard-855c9754f9-pcsrh                  kubernetes-dashboard
	60cde3c4410b8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           57 seconds ago       Running             coredns                     1                   6699b18b20982       coredns-66bc5c9577-mmrrf                               kube-system
	2ced1aed02ad0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   6ab993bcc410b       kube-proxy-9thkr                                       kube-system
	13abf7f01ac25       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   059504158c0e5       busybox                                                default
	07c4bdb8689c5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   97e0cfdd953a4       kindnet-xsm2q                                          kube-system
	4566a35049add       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   b6730cf8dc51b       storage-provisioner                                    kube-system
	844d5c6d2fdc6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   1907c35aff34e       etcd-default-k8s-diff-port-262764                      kube-system
	9183b8d5f0167       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   f9f716368826c       kube-scheduler-default-k8s-diff-port-262764            kube-system
	3c79c59cf7838       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   d6c1cdafb98ee       kube-apiserver-default-k8s-diff-port-262764            kube-system
	69a0aeac49139       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   bf751ca5cdd3d       kube-controller-manager-default-k8s-diff-port-262764   kube-system
	
	
	==> coredns [60cde3c4410b8d5c0f52861bdff9ef2cbfc4e321255b604d3b58a908126f5ad5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59925 - 34987 "HINFO IN 99384716890802852.7991162138811048242. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.013903013s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-262764
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-262764
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=default-k8s-diff-port-262764
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_56_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:56:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-262764
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:58:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:58:04 +0000   Sun, 23 Nov 2025 08:55:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:58:04 +0000   Sun, 23 Nov 2025 08:55:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:58:04 +0000   Sun, 23 Nov 2025 08:55:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:58:04 +0000   Sun, 23 Nov 2025 08:56:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-262764
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                9167756b-ee2d-4d27-ae18-a988612654cb
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-mmrrf                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-default-k8s-diff-port-262764                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m26s
	  kube-system                 kindnet-xsm2q                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-default-k8s-diff-port-262764             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-262764    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-9thkr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-default-k8s-diff-port-262764             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vqt6b              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-pcsrh                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m19s                  kube-proxy       
	  Normal   Starting                 57s                    kube-proxy       
	  Warning  CgroupV1                 2m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m25s                  kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m25s                  kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m25s                  kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m25s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m21s                  node-controller  Node default-k8s-diff-port-262764 event: Registered Node default-k8s-diff-port-262764 in Controller
	  Normal   NodeReady                99s                    kubelet          Node default-k8s-diff-port-262764 status is now: NodeReady
	  Normal   Starting                 65s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-262764 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                    node-controller  Node default-k8s-diff-port-262764 event: Registered Node default-k8s-diff-port-262764 in Controller
	
	
	==> dmesg <==
	[Nov23 08:35] overlayfs: idmapped layers are currently not supported
	[Nov23 08:36] overlayfs: idmapped layers are currently not supported
	[Nov23 08:37] overlayfs: idmapped layers are currently not supported
	[Nov23 08:38] overlayfs: idmapped layers are currently not supported
	[  +8.276067] overlayfs: idmapped layers are currently not supported
	[Nov23 08:39] overlayfs: idmapped layers are currently not supported
	[ +25.090966] overlayfs: idmapped layers are currently not supported
	[Nov23 08:40] overlayfs: idmapped layers are currently not supported
	[ +26.896711] overlayfs: idmapped layers are currently not supported
	[Nov23 08:41] overlayfs: idmapped layers are currently not supported
	[Nov23 08:43] overlayfs: idmapped layers are currently not supported
	[Nov23 08:45] overlayfs: idmapped layers are currently not supported
	[Nov23 08:46] overlayfs: idmapped layers are currently not supported
	[Nov23 08:47] overlayfs: idmapped layers are currently not supported
	[Nov23 08:49] overlayfs: idmapped layers are currently not supported
	[Nov23 08:51] overlayfs: idmapped layers are currently not supported
	[ +55.116920] overlayfs: idmapped layers are currently not supported
	[Nov23 08:52] overlayfs: idmapped layers are currently not supported
	[  +5.731396] overlayfs: idmapped layers are currently not supported
	[Nov23 08:53] overlayfs: idmapped layers are currently not supported
	[Nov23 08:54] overlayfs: idmapped layers are currently not supported
	[Nov23 08:55] overlayfs: idmapped layers are currently not supported
	[Nov23 08:56] overlayfs: idmapped layers are currently not supported
	[Nov23 08:57] overlayfs: idmapped layers are currently not supported
	[Nov23 08:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [844d5c6d2fdc6889c36b9911a4a6534e4317818c129eb910010b6c0ffb4f03f7] <==
	{"level":"warn","ts":"2025-11-23T08:57:31.672851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:31.709134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:31.750900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:31.799964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:31.851857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:31.885826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:31.923504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:31.952000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:31.964813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:31.984649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.012058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.024051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.056922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.075701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.102803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.130030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.146048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.174115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.227042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.255709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.285644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.321341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.345235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.361620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:57:32.451864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60942","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:58:32 up  9:40,  0 user,  load average: 4.07, 3.30, 2.74
	Linux default-k8s-diff-port-262764 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [07c4bdb8689c57d9887abb8863977f22eb98f12f2443cd2e95a9a97f5068a9cb] <==
	I1123 08:57:34.540510       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:57:34.540683       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:57:34.540800       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:57:34.540812       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:57:34.540824       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:57:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:57:34.747433       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:57:34.747460       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:57:34.747470       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:57:34.747587       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:58:04.747806       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 08:58:04.747806       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 08:58:04.747926       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 08:58:04.748061       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1123 08:58:06.348323       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:58:06.348367       1 metrics.go:72] Registering metrics
	I1123 08:58:06.348421       1 controller.go:711] "Syncing nftables rules"
	I1123 08:58:14.738213       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:58:14.738265       1 main.go:301] handling current node
	I1123 08:58:24.743255       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:58:24.743292       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3c79c59cf7838dc0d18f5c3de6bc6a24338c907a9104ae14d156735b130d2671] <==
	I1123 08:57:33.778774       1 aggregator.go:171] initial CRD sync complete...
	I1123 08:57:33.778786       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 08:57:33.778792       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:57:33.778799       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:57:33.778969       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 08:57:33.802175       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 08:57:33.802742       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 08:57:33.844807       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 08:57:33.862726       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 08:57:33.862818       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 08:57:33.862825       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 08:57:33.869707       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 08:57:33.884605       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1123 08:57:33.892898       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 08:57:34.059494       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:57:34.478233       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:57:34.925040       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:57:34.990884       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:57:35.036865       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:57:35.051896       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:57:35.289314       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.91.28"}
	I1123 08:57:35.306506       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.197.117"}
	I1123 08:57:37.439928       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:57:37.489812       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:57:37.601041       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [69a0aeac491393aeac0ffcc4bc7ed28f76ff736f9b82dde46869747ff492411b] <==
	I1123 08:57:37.005317       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:57:37.006875       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:57:37.011091       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:57:37.015791       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:57:37.016509       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 08:57:37.024703       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:57:37.030017       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:57:37.030129       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:57:37.030143       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 08:57:37.030373       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 08:57:37.030442       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 08:57:37.030475       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 08:57:37.030504       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:57:37.030579       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:57:37.030669       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:57:37.030760       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-262764"
	I1123 08:57:37.030833       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 08:57:37.031421       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:57:37.033883       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:57:37.033960       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 08:57:37.036049       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:57:37.036119       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:57:37.039069       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:57:37.044196       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:57:37.050660       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [2ced1aed02ad0aec279a98d53d3a1bae737d38f73e10188df9c03f82b985a38f] <==
	I1123 08:57:34.996032       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:57:35.269412       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:57:35.369677       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:57:35.369713       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 08:57:35.369797       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:57:35.422946       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:57:35.423057       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:57:35.430275       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:57:35.430611       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:57:35.431377       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:57:35.432944       1 config.go:200] "Starting service config controller"
	I1123 08:57:35.432997       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:57:35.433042       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:57:35.433069       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:57:35.433776       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:57:35.433817       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:57:35.434465       1 config.go:309] "Starting node config controller"
	I1123 08:57:35.435037       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:57:35.435085       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:57:35.533123       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:57:35.534370       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:57:35.534373       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [9183b8d5f0167d65acae545428bcefaad15989e0187470c12fabe000b501d7b6] <==
	I1123 08:57:31.637059       1 serving.go:386] Generated self-signed cert in-memory
	I1123 08:57:34.388581       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:57:34.388608       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:57:34.398667       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:57:34.398746       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 08:57:34.398762       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 08:57:34.398787       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:57:34.407153       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:57:34.407168       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:57:34.421189       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:57:34.421213       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:57:34.499884       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 08:57:34.507738       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:57:34.523332       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:57:34 default-k8s-diff-port-262764 kubelet[784]: W1123 08:57:34.333440     784 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c/crio-059504158c0e54c18c6348a98e608dce0d0368ef955e6b0b418e3c5e9b722c0f WatchSource:0}: Error finding container 059504158c0e54c18c6348a98e608dce0d0368ef955e6b0b418e3c5e9b722c0f: Status 404 returned error can't find the container with id 059504158c0e54c18c6348a98e608dce0d0368ef955e6b0b418e3c5e9b722c0f
	Nov 23 08:57:34 default-k8s-diff-port-262764 kubelet[784]: W1123 08:57:34.497810     784 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c/crio-6ab993bcc410bbe38ac668d3f3dee3c9518e0a75064a8e0ef740359927ac40e9 WatchSource:0}: Error finding container 6ab993bcc410bbe38ac668d3f3dee3c9518e0a75064a8e0ef740359927ac40e9: Status 404 returned error can't find the container with id 6ab993bcc410bbe38ac668d3f3dee3c9518e0a75064a8e0ef740359927ac40e9
	Nov 23 08:57:37 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:37.680030     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/30d0a90d-21de-40ab-802a-ef4067be718b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-pcsrh\" (UID: \"30d0a90d-21de-40ab-802a-ef4067be718b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pcsrh"
	Nov 23 08:57:37 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:37.680085     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txvlr\" (UniqueName: \"kubernetes.io/projected/f7311d48-111b-4b4a-adce-2e7dab6310d3-kube-api-access-txvlr\") pod \"dashboard-metrics-scraper-6ffb444bf9-vqt6b\" (UID: \"f7311d48-111b-4b4a-adce-2e7dab6310d3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b"
	Nov 23 08:57:37 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:37.680117     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f7311d48-111b-4b4a-adce-2e7dab6310d3-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vqt6b\" (UID: \"f7311d48-111b-4b4a-adce-2e7dab6310d3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b"
	Nov 23 08:57:37 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:37.680143     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb5hg\" (UniqueName: \"kubernetes.io/projected/30d0a90d-21de-40ab-802a-ef4067be718b-kube-api-access-vb5hg\") pod \"kubernetes-dashboard-855c9754f9-pcsrh\" (UID: \"30d0a90d-21de-40ab-802a-ef4067be718b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pcsrh"
	Nov 23 08:57:37 default-k8s-diff-port-262764 kubelet[784]: W1123 08:57:37.957062     784 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c3373e1079a61112d42134ac393010b57cb5548a12d01c339bf9415c0fba841c/crio-9bec0c7b6fad0e42bb807156ce5a5287e45ec5f8ff702b84cde2cd4d992c8b30 WatchSource:0}: Error finding container 9bec0c7b6fad0e42bb807156ce5a5287e45ec5f8ff702b84cde2cd4d992c8b30: Status 404 returned error can't find the container with id 9bec0c7b6fad0e42bb807156ce5a5287e45ec5f8ff702b84cde2cd4d992c8b30
	Nov 23 08:57:42 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:42.695315     784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 08:57:49 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:49.088729     784 scope.go:117] "RemoveContainer" containerID="b523f231039c5ec4314304cd5bafa1975ba4b318b1e289736564c3c8cde28e3d"
	Nov 23 08:57:49 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:49.139245     784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pcsrh" podStartSLOduration=7.312493387 podStartE2EDuration="12.131715423s" podCreationTimestamp="2025-11-23 08:57:37 +0000 UTC" firstStartedPulling="2025-11-23 08:57:37.941323684 +0000 UTC m=+10.217884598" lastFinishedPulling="2025-11-23 08:57:42.76054572 +0000 UTC m=+15.037106634" observedRunningTime="2025-11-23 08:57:43.07638325 +0000 UTC m=+15.352944164" watchObservedRunningTime="2025-11-23 08:57:49.131715423 +0000 UTC m=+21.408276337"
	Nov 23 08:57:50 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:50.093964     784 scope.go:117] "RemoveContainer" containerID="b523f231039c5ec4314304cd5bafa1975ba4b318b1e289736564c3c8cde28e3d"
	Nov 23 08:57:50 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:50.094347     784 scope.go:117] "RemoveContainer" containerID="d91db26d7a423c34ed7194207ac90a0603a61f353e4756ede7904a1575c13478"
	Nov 23 08:57:50 default-k8s-diff-port-262764 kubelet[784]: E1123 08:57:50.094503     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqt6b_kubernetes-dashboard(f7311d48-111b-4b4a-adce-2e7dab6310d3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b" podUID="f7311d48-111b-4b4a-adce-2e7dab6310d3"
	Nov 23 08:57:57 default-k8s-diff-port-262764 kubelet[784]: I1123 08:57:57.900092     784 scope.go:117] "RemoveContainer" containerID="d91db26d7a423c34ed7194207ac90a0603a61f353e4756ede7904a1575c13478"
	Nov 23 08:57:57 default-k8s-diff-port-262764 kubelet[784]: E1123 08:57:57.900717     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqt6b_kubernetes-dashboard(f7311d48-111b-4b4a-adce-2e7dab6310d3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b" podUID="f7311d48-111b-4b4a-adce-2e7dab6310d3"
	Nov 23 08:58:05 default-k8s-diff-port-262764 kubelet[784]: I1123 08:58:05.138241     784 scope.go:117] "RemoveContainer" containerID="4566a35049addd0b5ec2596842648d3a7c893e58c6ca48d9da4742ea7108e0c6"
	Nov 23 08:58:09 default-k8s-diff-port-262764 kubelet[784]: I1123 08:58:09.881228     784 scope.go:117] "RemoveContainer" containerID="d91db26d7a423c34ed7194207ac90a0603a61f353e4756ede7904a1575c13478"
	Nov 23 08:58:10 default-k8s-diff-port-262764 kubelet[784]: I1123 08:58:10.155641     784 scope.go:117] "RemoveContainer" containerID="d91db26d7a423c34ed7194207ac90a0603a61f353e4756ede7904a1575c13478"
	Nov 23 08:58:11 default-k8s-diff-port-262764 kubelet[784]: I1123 08:58:11.160079     784 scope.go:117] "RemoveContainer" containerID="51840ae2191430c19acbea32b7a4ed57fb678cd00bc67fb057f6a3ac7a3f536d"
	Nov 23 08:58:11 default-k8s-diff-port-262764 kubelet[784]: E1123 08:58:11.160245     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqt6b_kubernetes-dashboard(f7311d48-111b-4b4a-adce-2e7dab6310d3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b" podUID="f7311d48-111b-4b4a-adce-2e7dab6310d3"
	Nov 23 08:58:17 default-k8s-diff-port-262764 kubelet[784]: I1123 08:58:17.900027     784 scope.go:117] "RemoveContainer" containerID="51840ae2191430c19acbea32b7a4ed57fb678cd00bc67fb057f6a3ac7a3f536d"
	Nov 23 08:58:17 default-k8s-diff-port-262764 kubelet[784]: E1123 08:58:17.900620     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vqt6b_kubernetes-dashboard(f7311d48-111b-4b4a-adce-2e7dab6310d3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vqt6b" podUID="f7311d48-111b-4b4a-adce-2e7dab6310d3"
	Nov 23 08:58:28 default-k8s-diff-port-262764 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:58:28 default-k8s-diff-port-262764 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:58:28 default-k8s-diff-port-262764 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [fff43317f3f83aa6e5f347825226e4b1c677289710286c8a61446e42ac8bfdf1] <==
	2025/11/23 08:57:42 Starting overwatch
	2025/11/23 08:57:42 Using namespace: kubernetes-dashboard
	2025/11/23 08:57:42 Using in-cluster config to connect to apiserver
	2025/11/23 08:57:42 Using secret token for csrf signing
	2025/11/23 08:57:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 08:57:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 08:57:42 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 08:57:42 Generating JWE encryption key
	2025/11/23 08:57:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 08:57:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 08:57:43 Initializing JWE encryption key from synchronized object
	2025/11/23 08:57:43 Creating in-cluster Sidecar client
	2025/11/23 08:57:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 08:57:43 Serving insecurely on HTTP port: 9090
	2025/11/23 08:58:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [4566a35049addd0b5ec2596842648d3a7c893e58c6ca48d9da4742ea7108e0c6] <==
	I1123 08:57:34.904934       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 08:58:04.906901       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e721f02a88931cc3b946e7a2e214cebe713103c21c3212acd6d50e28153ad017] <==
	I1123 08:58:05.208840       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:58:05.208989       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:58:05.218361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:08.673901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:12.947815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:16.545824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:19.598822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:22.621357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:22.626820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:58:22.627054       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:58:22.627286       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-262764_7298b6dd-6745-4a59-952c-997c92de40b4!
	I1123 08:58:22.628225       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"664f8c79-8b37-4f2b-932e-885c1705fac8", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-262764_7298b6dd-6745-4a59-952c-997c92de40b4 became leader
	W1123 08:58:22.643933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:22.653275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:58:22.729646       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-262764_7298b6dd-6745-4a59-952c-997c92de40b4!
	W1123 08:58:24.656865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:24.664453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:26.667884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:26.672923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:28.676917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:28.684204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:30.690359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:30.696333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:32.699896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:32.707950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-262764 -n default-k8s-diff-port-262764
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-262764 -n default-k8s-diff-port-262764: exit status 2 (386.759381ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-262764 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (7.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-879861 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-879861 --alsologtostderr -v=1: exit status 80 (1.831371866s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-879861 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:59:06.376469 1243074 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:59:06.376624 1243074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:59:06.376631 1243074 out.go:374] Setting ErrFile to fd 2...
	I1123 08:59:06.376636 1243074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:59:06.376872 1243074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:59:06.377093 1243074 out.go:368] Setting JSON to false
	I1123 08:59:06.377110 1243074 mustload.go:66] Loading cluster: embed-certs-879861
	I1123 08:59:06.377487 1243074 config.go:182] Loaded profile config "embed-certs-879861": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:59:06.377916 1243074 cli_runner.go:164] Run: docker container inspect embed-certs-879861 --format={{.State.Status}}
	I1123 08:59:06.404083 1243074 host.go:66] Checking if "embed-certs-879861" exists ...
	I1123 08:59:06.404399 1243074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:59:06.480903 1243074 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:78 SystemTime:2025-11-23 08:59:06.471258735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:59:06.481510 1243074 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-879861 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 08:59:06.484857 1243074 out.go:179] * Pausing node embed-certs-879861 ... 
	I1123 08:59:06.487675 1243074 host.go:66] Checking if "embed-certs-879861" exists ...
	I1123 08:59:06.488000 1243074 ssh_runner.go:195] Run: systemctl --version
	I1123 08:59:06.488058 1243074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879861
	I1123 08:59:06.518165 1243074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/embed-certs-879861/id_rsa Username:docker}
	I1123 08:59:06.621765 1243074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:59:06.649098 1243074 pause.go:52] kubelet running: true
	I1123 08:59:06.649222 1243074 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:59:06.967680 1243074 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:59:06.967819 1243074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:59:07.057779 1243074 cri.go:89] found id: "974e41dfca4bb5fa762dfa1e5eecade15b2c4b2c22ad82c75a6372877e2740f1"
	I1123 08:59:07.057851 1243074 cri.go:89] found id: "65fa45aa8e6efe637640f88ff9ceb042fd3e516f2413a13626682652b20062b4"
	I1123 08:59:07.057871 1243074 cri.go:89] found id: "2cbf8fb48901c7787acff4b1eea16ad8538ae58630a4f3f48f5f5df71adc621d"
	I1123 08:59:07.057890 1243074 cri.go:89] found id: "29b4b15adaa040ee90f26e40b8ffbe32430ac9644e8116b1b5285cd10d5bca0a"
	I1123 08:59:07.057908 1243074 cri.go:89] found id: "5219876e4c84dfa8e988407b4095b408a9a272dd85d5f216ad25d5cb4fed1fe9"
	I1123 08:59:07.057942 1243074 cri.go:89] found id: "5aa8c8459e4b9c23abe051762e95525327017b8430025151409aa986f851ce46"
	I1123 08:59:07.057957 1243074 cri.go:89] found id: "f36bac59af61132cba19015b450f070860b81feac44898c54358545457989e10"
	I1123 08:59:07.057974 1243074 cri.go:89] found id: "c695f658e9e9e4d1eb46e631dbd8525ddee010d71131bde0f1db699f3f2daa7c"
	I1123 08:59:07.057991 1243074 cri.go:89] found id: "f2d11f1f6548926a628756b6786fd0d702c8dc1b841329fee5f1f0cb5dd84a13"
	I1123 08:59:07.058027 1243074 cri.go:89] found id: "ceeac5fc728e0c53507ccd760fb0549c119ea4b2b3d759bbc85de9f0c282089b"
	I1123 08:59:07.058044 1243074 cri.go:89] found id: "d77f359302a172c9c103fec56f3daf8bd603240bb30346c5c20f8d13be6368bf"
	I1123 08:59:07.058060 1243074 cri.go:89] found id: ""
	I1123 08:59:07.058135 1243074 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:59:07.069526 1243074 retry.go:31] will retry after 212.480438ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:59:07Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:59:07.282968 1243074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:59:07.295771 1243074 pause.go:52] kubelet running: false
	I1123 08:59:07.295848 1243074 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:59:07.508227 1243074 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:59:07.508307 1243074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:59:07.583871 1243074 cri.go:89] found id: "974e41dfca4bb5fa762dfa1e5eecade15b2c4b2c22ad82c75a6372877e2740f1"
	I1123 08:59:07.583895 1243074 cri.go:89] found id: "65fa45aa8e6efe637640f88ff9ceb042fd3e516f2413a13626682652b20062b4"
	I1123 08:59:07.583900 1243074 cri.go:89] found id: "2cbf8fb48901c7787acff4b1eea16ad8538ae58630a4f3f48f5f5df71adc621d"
	I1123 08:59:07.583904 1243074 cri.go:89] found id: "29b4b15adaa040ee90f26e40b8ffbe32430ac9644e8116b1b5285cd10d5bca0a"
	I1123 08:59:07.583907 1243074 cri.go:89] found id: "5219876e4c84dfa8e988407b4095b408a9a272dd85d5f216ad25d5cb4fed1fe9"
	I1123 08:59:07.583910 1243074 cri.go:89] found id: "5aa8c8459e4b9c23abe051762e95525327017b8430025151409aa986f851ce46"
	I1123 08:59:07.583914 1243074 cri.go:89] found id: "f36bac59af61132cba19015b450f070860b81feac44898c54358545457989e10"
	I1123 08:59:07.583917 1243074 cri.go:89] found id: "c695f658e9e9e4d1eb46e631dbd8525ddee010d71131bde0f1db699f3f2daa7c"
	I1123 08:59:07.583920 1243074 cri.go:89] found id: "f2d11f1f6548926a628756b6786fd0d702c8dc1b841329fee5f1f0cb5dd84a13"
	I1123 08:59:07.583926 1243074 cri.go:89] found id: "ceeac5fc728e0c53507ccd760fb0549c119ea4b2b3d759bbc85de9f0c282089b"
	I1123 08:59:07.583929 1243074 cri.go:89] found id: "d77f359302a172c9c103fec56f3daf8bd603240bb30346c5c20f8d13be6368bf"
	I1123 08:59:07.583932 1243074 cri.go:89] found id: ""
	I1123 08:59:07.583992 1243074 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:59:07.596026 1243074 retry.go:31] will retry after 188.95263ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:59:07Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:59:07.785416 1243074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:59:07.805520 1243074 pause.go:52] kubelet running: false
	I1123 08:59:07.805591 1243074 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:59:08.023223 1243074 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:59:08.023310 1243074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:59:08.110441 1243074 cri.go:89] found id: "974e41dfca4bb5fa762dfa1e5eecade15b2c4b2c22ad82c75a6372877e2740f1"
	I1123 08:59:08.110471 1243074 cri.go:89] found id: "65fa45aa8e6efe637640f88ff9ceb042fd3e516f2413a13626682652b20062b4"
	I1123 08:59:08.110477 1243074 cri.go:89] found id: "2cbf8fb48901c7787acff4b1eea16ad8538ae58630a4f3f48f5f5df71adc621d"
	I1123 08:59:08.110481 1243074 cri.go:89] found id: "29b4b15adaa040ee90f26e40b8ffbe32430ac9644e8116b1b5285cd10d5bca0a"
	I1123 08:59:08.110485 1243074 cri.go:89] found id: "5219876e4c84dfa8e988407b4095b408a9a272dd85d5f216ad25d5cb4fed1fe9"
	I1123 08:59:08.110489 1243074 cri.go:89] found id: "5aa8c8459e4b9c23abe051762e95525327017b8430025151409aa986f851ce46"
	I1123 08:59:08.110493 1243074 cri.go:89] found id: "f36bac59af61132cba19015b450f070860b81feac44898c54358545457989e10"
	I1123 08:59:08.110497 1243074 cri.go:89] found id: "c695f658e9e9e4d1eb46e631dbd8525ddee010d71131bde0f1db699f3f2daa7c"
	I1123 08:59:08.110500 1243074 cri.go:89] found id: "f2d11f1f6548926a628756b6786fd0d702c8dc1b841329fee5f1f0cb5dd84a13"
	I1123 08:59:08.110506 1243074 cri.go:89] found id: "ceeac5fc728e0c53507ccd760fb0549c119ea4b2b3d759bbc85de9f0c282089b"
	I1123 08:59:08.110509 1243074 cri.go:89] found id: "d77f359302a172c9c103fec56f3daf8bd603240bb30346c5c20f8d13be6368bf"
	I1123 08:59:08.110513 1243074 cri.go:89] found id: ""
	I1123 08:59:08.110579 1243074 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:59:08.124832 1243074 out.go:203] 
	W1123 08:59:08.127738 1243074 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:59:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:59:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:59:08.127796 1243074 out.go:285] * 
	* 
	W1123 08:59:08.137367 1243074 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:59:08.140332 1243074 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-879861 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-879861
helpers_test.go:243: (dbg) docker inspect embed-certs-879861:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5",
	        "Created": "2025-11-23T08:56:19.024991587Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1236983,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:58:01.554798429Z",
	            "FinishedAt": "2025-11-23T08:58:00.706397377Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/hostname",
	        "HostsPath": "/var/lib/docker/containers/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/hosts",
	        "LogPath": "/var/lib/docker/containers/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5-json.log",
	        "Name": "/embed-certs-879861",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-879861:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-879861",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5",
	                "LowerDir": "/var/lib/docker/overlay2/a3ebc4c752dd4d002b5943db6e5cfab20a769c34737858969bb4d642f4ef53ce-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3ebc4c752dd4d002b5943db6e5cfab20a769c34737858969bb4d642f4ef53ce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3ebc4c752dd4d002b5943db6e5cfab20a769c34737858969bb4d642f4ef53ce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3ebc4c752dd4d002b5943db6e5cfab20a769c34737858969bb4d642f4ef53ce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-879861",
	                "Source": "/var/lib/docker/volumes/embed-certs-879861/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-879861",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-879861",
	                "name.minikube.sigs.k8s.io": "embed-certs-879861",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fddd92661f07dc298a6b82937fe2d81ad80e7e6f10bb08a57756cd1f11978b56",
	            "SandboxKey": "/var/run/docker/netns/fddd92661f07",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34537"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34538"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34541"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34539"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34540"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-879861": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:50:4a:16:b4:d0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "74cdfb3f8ce6a2d207916e4d31bc2aa3571f99fa42bfb2db8c6fa76bac60c37f",
	                    "EndpointID": "487044dd38aba703e70e0cc92f69cf7896194e4731032e6e0e82d7805ac6d7cb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-879861",
	                        "0b83e5e6966d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-879861 -n embed-certs-879861
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-879861 -n embed-certs-879861: exit status 2 (412.330755ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-879861 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-879861 logs -n 25: (1.570739995s)
E1123 08:59:10.169807 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-283312 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:55 UTC │
	│ image   │ old-k8s-version-283312 image list --format=json                                                                                                                                                                                               │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ pause   │ -p old-k8s-version-283312 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ delete  │ -p old-k8s-version-283312                                                                                                                                                                                                                     │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ delete  │ -p old-k8s-version-283312                                                                                                                                                                                                                     │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ start   │ -p default-k8s-diff-port-262764 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p cert-expiration-322507 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-322507       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p cert-expiration-322507                                                                                                                                                                                                                     │ cert-expiration-322507       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-262764 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-262764 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-262764 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ start   │ -p default-k8s-diff-port-262764 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:58 UTC │
	│ addons  │ enable metrics-server -p embed-certs-879861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ stop    │ -p embed-certs-879861 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-879861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ image   │ default-k8s-diff-port-262764 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ pause   │ -p default-k8s-diff-port-262764 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-262764                                                                                                                                                                                                               │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ delete  │ -p default-k8s-diff-port-262764                                                                                                                                                                                                               │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ delete  │ -p disable-driver-mounts-880590                                                                                                                                                                                                               │ disable-driver-mounts-880590 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p no-preload-591175 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │                     │
	│ image   │ embed-certs-879861 image list --format=json                                                                                                                                                                                                   │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ pause   │ -p embed-certs-879861 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:58:36
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:58:36.912718 1240463 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:58:36.912855 1240463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:58:36.912867 1240463 out.go:374] Setting ErrFile to fd 2...
	I1123 08:58:36.912873 1240463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:58:36.913143 1240463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:58:36.913604 1240463 out.go:368] Setting JSON to false
	I1123 08:58:36.914561 1240463 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34862,"bootTime":1763853455,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 08:58:36.914625 1240463 start.go:143] virtualization:  
	I1123 08:58:36.918203 1240463 out.go:179] * [no-preload-591175] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:58:36.922182 1240463 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:58:36.922333 1240463 notify.go:221] Checking for updates...
	I1123 08:58:36.928201 1240463 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:58:36.931235 1240463 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:58:36.934248 1240463 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 08:58:36.937104 1240463 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:58:36.940118 1240463 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:58:36.943679 1240463 config.go:182] Loaded profile config "embed-certs-879861": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:36.943819 1240463 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:58:36.980579 1240463 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:58:36.980696 1240463 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:58:37.043778 1240463 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:58:37.033866344 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:58:37.043877 1240463 docker.go:319] overlay module found
	I1123 08:58:37.047071 1240463 out.go:179] * Using the docker driver based on user configuration
	I1123 08:58:37.049936 1240463 start.go:309] selected driver: docker
	I1123 08:58:37.049958 1240463 start.go:927] validating driver "docker" against <nil>
	I1123 08:58:37.049971 1240463 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:58:37.050724 1240463 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:58:37.107821 1240463 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:58:37.099237097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:58:37.107974 1240463 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:58:37.108210 1240463 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:58:37.111161 1240463 out.go:179] * Using Docker driver with root privileges
	I1123 08:58:37.114088 1240463 cni.go:84] Creating CNI manager for ""
	I1123 08:58:37.114159 1240463 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:58:37.114172 1240463 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:58:37.114249 1240463 start.go:353] cluster config:
	{Name:no-preload-591175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-591175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:58:37.117268 1240463 out.go:179] * Starting "no-preload-591175" primary control-plane node in "no-preload-591175" cluster
	I1123 08:58:37.120029 1240463 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:58:37.122999 1240463 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:58:37.125909 1240463 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:58:37.125997 1240463 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:58:37.126042 1240463 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/config.json ...
	I1123 08:58:37.126072 1240463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/config.json: {Name:mk3d28f5ab07c5113a556e30e572b648086a95c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:37.126310 1240463 cache.go:107] acquiring lock: {Name:mka2cb35964388564c4a147c0f220dec8bb32f92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:37.127077 1240463 cache.go:115] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1123 08:58:37.127101 1240463 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 798.632µs
	I1123 08:58:37.127121 1240463 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1123 08:58:37.127160 1240463 cache.go:107] acquiring lock: {Name:mkfa049396ba1dee12c76864774f3aeacdb25dbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:37.127336 1240463 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:58:37.127803 1240463 cache.go:107] acquiring lock: {Name:mked8fbb27666d48a91880577550b6d3c15d46c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:37.127943 1240463 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:58:37.128161 1240463 cache.go:107] acquiring lock: {Name:mk78ea502d01db87a3fd0add08c07fa53ee3c177 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:37.128268 1240463 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:58:37.128475 1240463 cache.go:107] acquiring lock: {Name:mk8f8894eb123f292e1befe37ca59025bf250750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:37.128579 1240463 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:58:37.128781 1240463 cache.go:107] acquiring lock: {Name:mk5d6b1c9a54df439137e5ed9e773e09f1f35c7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:37.128907 1240463 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1123 08:58:37.129108 1240463 cache.go:107] acquiring lock: {Name:mk24b215fc8a1c4de845c20a5f8cbdfbdd48812c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:37.129243 1240463 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:58:37.129482 1240463 cache.go:107] acquiring lock: {Name:mkd443765c9d6bedf54886650c57996d65552ffb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:37.129614 1240463 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:58:37.131097 1240463 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:58:37.132230 1240463 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:58:37.132540 1240463 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:58:37.132664 1240463 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:58:37.132823 1240463 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:58:37.132826 1240463 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:58:37.132918 1240463 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1123 08:58:37.155394 1240463 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:58:37.155419 1240463 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:58:37.155434 1240463 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:58:37.155491 1240463 start.go:360] acquireMachinesLock for no-preload-591175: {Name:mk29286da1b052dc7b05c36520527aed8159771a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:37.155598 1240463 start.go:364] duration metric: took 85.839µs to acquireMachinesLock for "no-preload-591175"
	I1123 08:58:37.155627 1240463 start.go:93] Provisioning new machine with config: &{Name:no-preload-591175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-591175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:58:37.155706 1240463 start.go:125] createHost starting for "" (driver="docker")
	W1123 08:58:36.952271 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	W1123 08:58:39.450939 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	I1123 08:58:37.159311 1240463 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:58:37.159548 1240463 start.go:159] libmachine.API.Create for "no-preload-591175" (driver="docker")
	I1123 08:58:37.159584 1240463 client.go:173] LocalClient.Create starting
	I1123 08:58:37.159660 1240463 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem
	I1123 08:58:37.159705 1240463 main.go:143] libmachine: Decoding PEM data...
	I1123 08:58:37.159725 1240463 main.go:143] libmachine: Parsing certificate...
	I1123 08:58:37.159778 1240463 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem
	I1123 08:58:37.159800 1240463 main.go:143] libmachine: Decoding PEM data...
	I1123 08:58:37.159815 1240463 main.go:143] libmachine: Parsing certificate...
	I1123 08:58:37.160203 1240463 cli_runner.go:164] Run: docker network inspect no-preload-591175 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:58:37.186702 1240463 cli_runner.go:211] docker network inspect no-preload-591175 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:58:37.186858 1240463 network_create.go:284] running [docker network inspect no-preload-591175] to gather additional debugging logs...
	I1123 08:58:37.186879 1240463 cli_runner.go:164] Run: docker network inspect no-preload-591175
	W1123 08:58:37.204942 1240463 cli_runner.go:211] docker network inspect no-preload-591175 returned with exit code 1
	I1123 08:58:37.204968 1240463 network_create.go:287] error running [docker network inspect no-preload-591175]: docker network inspect no-preload-591175: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-591175 not found
	I1123 08:58:37.204980 1240463 network_create.go:289] output of [docker network inspect no-preload-591175]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-591175 not found
	
	** /stderr **
	I1123 08:58:37.205081 1240463 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:58:37.221895 1240463 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-32d396d9f7df IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:9b:29:4a:5c:ab} reservation:<nil>}
	I1123 08:58:37.222185 1240463 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-859c97accd92 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:ea:cf:62:f4:f8} reservation:<nil>}
	I1123 08:58:37.222510 1240463 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-50e966d7b39a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:1d:b6:b9:b9:ef} reservation:<nil>}
	I1123 08:58:37.222907 1240463 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-74cdfb3f8ce6 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:3d:70:42:34:33} reservation:<nil>}
	I1123 08:58:37.223522 1240463 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c76b20}
	I1123 08:58:37.223547 1240463 network_create.go:124] attempt to create docker network no-preload-591175 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 08:58:37.223597 1240463 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-591175 no-preload-591175
	I1123 08:58:37.285948 1240463 network_create.go:108] docker network no-preload-591175 192.168.85.0/24 created
	I1123 08:58:37.285990 1240463 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-591175" container
	I1123 08:58:37.286149 1240463 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:58:37.309222 1240463 cli_runner.go:164] Run: docker volume create no-preload-591175 --label name.minikube.sigs.k8s.io=no-preload-591175 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:58:37.326681 1240463 oci.go:103] Successfully created a docker volume no-preload-591175
	I1123 08:58:37.326773 1240463 cli_runner.go:164] Run: docker run --rm --name no-preload-591175-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-591175 --entrypoint /usr/bin/test -v no-preload-591175:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:58:37.461246 1240463 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1123 08:58:37.482587 1240463 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1123 08:58:37.493165 1240463 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1123 08:58:37.495302 1240463 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1123 08:58:37.499393 1240463 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1123 08:58:37.504583 1240463 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1123 08:58:37.506962 1240463 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1123 08:58:37.551861 1240463 cache.go:157] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1123 08:58:37.551894 1240463 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 423.115774ms
	I1123 08:58:37.551907 1240463 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 08:58:37.918149 1240463 cache.go:157] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 08:58:37.918221 1240463 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 789.748192ms
	I1123 08:58:37.918247 1240463 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 08:58:38.018899 1240463 oci.go:107] Successfully prepared a docker volume no-preload-591175
	I1123 08:58:38.018942 1240463 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1123 08:58:38.019089 1240463 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:58:38.019458 1240463 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:58:38.082651 1240463 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-591175 --name no-preload-591175 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-591175 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-591175 --network no-preload-591175 --ip 192.168.85.2 --volume no-preload-591175:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:58:38.342225 1240463 cache.go:157] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 08:58:38.342259 1240463 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.212780291s
	I1123 08:58:38.342277 1240463 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 08:58:38.432482 1240463 cache.go:157] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 08:58:38.432564 1240463 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.304405439s
	I1123 08:58:38.432591 1240463 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 08:58:38.491905 1240463 cache.go:157] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 08:58:38.491933 1240463 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.364216602s
	I1123 08:58:38.491945 1240463 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 08:58:38.508193 1240463 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Running}}
	I1123 08:58:38.554549 1240463 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 08:58:38.573739 1240463 cache.go:157] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 08:58:38.573764 1240463 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.446624818s
	I1123 08:58:38.573776 1240463 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 08:58:38.598643 1240463 cli_runner.go:164] Run: docker exec no-preload-591175 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:58:38.679006 1240463 oci.go:144] the created container "no-preload-591175" has a running status.
	I1123 08:58:38.679036 1240463 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa...
	I1123 08:58:38.779427 1240463 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:58:38.805453 1240463 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 08:58:38.840941 1240463 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:58:38.840964 1240463 kic_runner.go:114] Args: [docker exec --privileged no-preload-591175 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:58:38.904011 1240463 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 08:58:38.935826 1240463 machine.go:94] provisionDockerMachine start ...
	I1123 08:58:38.935929 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:39.004922 1240463 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:39.005306 1240463 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34542 <nil> <nil>}
	I1123 08:58:39.005323 1240463 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:58:39.006093 1240463 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46076->127.0.0.1:34542: read: connection reset by peer
	I1123 08:58:39.609969 1240463 cache.go:157] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 08:58:39.609999 1240463 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.480904808s
	I1123 08:58:39.610011 1240463 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 08:58:39.610047 1240463 cache.go:87] Successfully saved all images to host disk.
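	The cache.go entries above save each control-plane image to a tar file whose path is derived from the image reference, with the tag separator rewritten as an underscore (registry.k8s.io/pause:3.10.1 becomes .../images/arm64/registry.k8s.io/pause_3.10.1). A rough sketch of that path mapping, inferred from the paths in the log rather than from minikube's source:

	package main

	import (
		"fmt"
		"path/filepath"
		"strings"
	)

	// cachePath maps an image reference such as "registry.k8s.io/pause:3.10.1"
	// to the on-disk tar location seen in the log, e.g.
	// <miniHome>/cache/images/arm64/registry.k8s.io/pause_3.10.1.
	func cachePath(miniHome, arch, image string) string {
		name := image
		if i := strings.LastIndex(image, ":"); i != -1 {
			name = image[:i] + "_" + image[i+1:] // tag separator becomes "_"
		}
		return filepath.Join(miniHome, "cache", "images", arch, name)
	}

	func main() {
		fmt.Println(cachePath("/home/jenkins/.minikube", "arm64", "registry.k8s.io/pause:3.10.1"))
		fmt.Println(cachePath("/home/jenkins/.minikube", "arm64", "registry.k8s.io/coredns/coredns:v1.12.1"))
	}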
	I1123 08:58:42.172157 1240463 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-591175
	
	I1123 08:58:42.172200 1240463 ubuntu.go:182] provisioning hostname "no-preload-591175"
	I1123 08:58:42.172277 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:42.216962 1240463 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:42.217361 1240463 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34542 <nil> <nil>}
	I1123 08:58:42.217389 1240463 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-591175 && echo "no-preload-591175" | sudo tee /etc/hostname
	I1123 08:58:42.389090 1240463 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-591175
	
	I1123 08:58:42.389227 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:42.407473 1240463 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:42.407786 1240463 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34542 <nil> <nil>}
	I1123 08:58:42.407803 1240463 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-591175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-591175/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-591175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:58:42.559497 1240463 main.go:143] libmachine: SSH cmd err, output: <nil>: 
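	provisionDockerMachine runs each of the hostname and /etc/hosts commands above over SSH against the forwarded port 34542, authenticating with the generated id_rsa key. Below is a minimal sketch of issuing one such remote command with golang.org/x/crypto/ssh (an external module); the address, user, and key path are taken from the log, but this is illustrative, not minikube's sshutil/libmachine code.

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runRemote dials the forwarded SSH port of the kic container and runs a
	// single command, roughly what the "About to run SSH command" lines do.
	func runRemote(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
		})
		if err != nil {
			return "", err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()
		out, err := session.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runRemote("127.0.0.1:34542", "docker",
			os.Getenv("HOME")+"/.minikube/machines/no-preload-591175/id_rsa", "hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(out)
	}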
	I1123 08:58:42.559588 1240463 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 08:58:42.559626 1240463 ubuntu.go:190] setting up certificates
	I1123 08:58:42.559663 1240463 provision.go:84] configureAuth start
	I1123 08:58:42.559747 1240463 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-591175
	I1123 08:58:42.578240 1240463 provision.go:143] copyHostCerts
	I1123 08:58:42.578298 1240463 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem, removing ...
	I1123 08:58:42.578310 1240463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem
	I1123 08:58:42.578385 1240463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 08:58:42.578505 1240463 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem, removing ...
	I1123 08:58:42.578510 1240463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem
	I1123 08:58:42.578536 1240463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 08:58:42.578592 1240463 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem, removing ...
	I1123 08:58:42.578596 1240463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem
	I1123 08:58:42.578619 1240463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
	I1123 08:58:42.578670 1240463 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.no-preload-591175 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-591175]
	I1123 08:58:42.811716 1240463 provision.go:177] copyRemoteCerts
	I1123 08:58:42.811812 1240463 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:58:42.811879 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:42.841022 1240463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 08:58:42.947223 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:58:42.970352 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:58:42.988454 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:58:43.007175 1240463 provision.go:87] duration metric: took 447.467439ms to configureAuth
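	configureAuth above copies the host CA material and then issues a server certificate for the node carrying the SANs [127.0.0.1 192.168.85.2 localhost minikube no-preload-591175]. The standard-library sketch below shows the shape of that step; the CA is generated on the fly here purely for illustration, whereas minikube signs with the existing .minikube/certs/ca.pem key, and the three-year validity only mirrors the 26280h CertExpiration in the profile.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	// A compact sketch of "generating server cert ... san=[...]": sign a leaf
	// certificate with a CA, embedding the IP and DNS SANs from the log.
	func main() {
		// Throwaway CA standing in for .minikube/certs/ca.pem in this sketch.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-591175"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0), // ~26280h, as in the profile
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			DNSNames:     []string{"localhost", "minikube", "no-preload-591175"},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}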
	I1123 08:58:43.007275 1240463 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:58:43.007501 1240463 config.go:182] Loaded profile config "no-preload-591175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:43.007620 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:43.026667 1240463 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:43.026994 1240463 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34542 <nil> <nil>}
	I1123 08:58:43.027012 1240463 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:58:43.423594 1240463 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:58:43.423658 1240463 machine.go:97] duration metric: took 4.487808271s to provisionDockerMachine
	I1123 08:58:43.423682 1240463 client.go:176] duration metric: took 6.264088237s to LocalClient.Create
	I1123 08:58:43.423707 1240463 start.go:167] duration metric: took 6.264160826s to libmachine.API.Create "no-preload-591175"
	I1123 08:58:43.423739 1240463 start.go:293] postStartSetup for "no-preload-591175" (driver="docker")
	I1123 08:58:43.423767 1240463 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:58:43.423862 1240463 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:58:43.423927 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:43.442025 1240463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 08:58:43.547251 1240463 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:58:43.550823 1240463 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:58:43.550892 1240463 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:58:43.550917 1240463 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/addons for local assets ...
	I1123 08:58:43.550988 1240463 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/files for local assets ...
	I1123 08:58:43.551072 1240463 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem -> 10431592.pem in /etc/ssl/certs
	I1123 08:58:43.551228 1240463 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:58:43.558825 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:58:43.577376 1240463 start.go:296] duration metric: took 153.605832ms for postStartSetup
	I1123 08:58:43.577803 1240463 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-591175
	I1123 08:58:43.595850 1240463 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/config.json ...
	I1123 08:58:43.596141 1240463 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:58:43.596192 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:43.613717 1240463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 08:58:43.716325 1240463 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:58:43.721329 1240463 start.go:128] duration metric: took 6.565610133s to createHost
	I1123 08:58:43.721356 1240463 start.go:83] releasing machines lock for "no-preload-591175", held for 6.565743865s
	I1123 08:58:43.721434 1240463 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-591175
	I1123 08:58:43.738971 1240463 ssh_runner.go:195] Run: cat /version.json
	I1123 08:58:43.739024 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:43.739082 1240463 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:58:43.739146 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:43.757034 1240463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 08:58:43.757279 1240463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 08:58:43.862906 1240463 ssh_runner.go:195] Run: systemctl --version
	I1123 08:58:43.957997 1240463 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:58:43.992287 1240463 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:58:43.997097 1240463 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:58:43.997167 1240463 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:58:44.030329 1240463 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:58:44.030351 1240463 start.go:496] detecting cgroup driver to use...
	I1123 08:58:44.030404 1240463 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:58:44.030478 1240463 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:58:44.049644 1240463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:58:44.062569 1240463 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:58:44.062671 1240463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:58:44.081468 1240463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:58:44.101459 1240463 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:58:44.227925 1240463 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:58:44.359478 1240463 docker.go:234] disabling docker service ...
	I1123 08:58:44.359549 1240463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:58:44.383227 1240463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:58:44.398566 1240463 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:58:44.538990 1240463 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:58:44.665125 1240463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:58:44.678808 1240463 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:58:44.698846 1240463 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:58:44.698928 1240463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:44.707741 1240463 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 08:58:44.707869 1240463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:44.716942 1240463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:44.728685 1240463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:44.741978 1240463 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:58:44.750885 1240463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:44.761458 1240463 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:44.777867 1240463 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:44.787222 1240463 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:58:44.794961 1240463 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:58:44.802414 1240463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:58:44.932515 1240463 ssh_runner.go:195] Run: sudo systemctl restart crio
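	The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed before restarting CRI-O: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is forced to cgroupfs, and conmon_cgroup is dropped and re-added as "pod". A rough functional equivalent of those three edits, operating on an in-memory string instead of shelling out (the sample input in main is invented for illustration):

	package main

	import (
		"fmt"
		"regexp"
	)

	// applyCrioOverrides mimics the sed edits above on the contents of
	// /etc/crio/crio.conf.d/02-crio.conf.
	func applyCrioOverrides(conf string) string {
		// 1. pin the pause image
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// 2. drop any existing conmon_cgroup line
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
		// 3. force cgroupfs and re-add conmon_cgroup right after it
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		return conf
	}

	func main() {
		fmt.Println(applyCrioOverrides(`[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "system.slice"
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	`))
	}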
	I1123 08:58:45.180737 1240463 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:58:45.180911 1240463 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:58:45.193707 1240463 start.go:564] Will wait 60s for crictl version
	I1123 08:58:45.194042 1240463 ssh_runner.go:195] Run: which crictl
	I1123 08:58:45.203512 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:58:45.272285 1240463 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:58:45.272420 1240463 ssh_runner.go:195] Run: crio --version
	I1123 08:58:45.312909 1240463 ssh_runner.go:195] Run: crio --version
	I1123 08:58:45.349605 1240463 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1123 08:58:41.451898 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	W1123 08:58:43.452750 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	W1123 08:58:45.952903 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	I1123 08:58:45.352604 1240463 cli_runner.go:164] Run: docker network inspect no-preload-591175 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:58:45.368196 1240463 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:58:45.371894 1240463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:58:45.383454 1240463 kubeadm.go:884] updating cluster {Name:no-preload-591175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-591175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:58:45.383570 1240463 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:58:45.383614 1240463 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:58:45.407452 1240463 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1123 08:58:45.407476 1240463 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1123 08:58:45.407514 1240463 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:58:45.407714 1240463 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:58:45.407802 1240463 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:58:45.407833 1240463 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:58:45.407942 1240463 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:58:45.407992 1240463 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:58:45.408039 1240463 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:58:45.408127 1240463 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1123 08:58:45.410067 1240463 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:58:45.410331 1240463 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:58:45.410495 1240463 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:58:45.410644 1240463 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:58:45.410790 1240463 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1123 08:58:45.410932 1240463 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:58:45.411283 1240463 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:58:45.411537 1240463 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:58:45.619870 1240463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:58:45.636228 1240463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:58:45.636510 1240463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:58:45.649814 1240463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:58:45.650002 1240463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1123 08:58:45.656674 1240463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:58:45.689467 1240463 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1123 08:58:45.689511 1240463 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:58:45.689559 1240463 ssh_runner.go:195] Run: which crictl
	I1123 08:58:45.694353 1240463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1123 08:58:45.769880 1240463 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1123 08:58:45.769969 1240463 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:58:45.770055 1240463 ssh_runner.go:195] Run: which crictl
	I1123 08:58:45.770123 1240463 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1123 08:58:45.770401 1240463 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:58:45.770455 1240463 ssh_runner.go:195] Run: which crictl
	I1123 08:58:45.770171 1240463 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1123 08:58:45.770542 1240463 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1123 08:58:45.770618 1240463 ssh_runner.go:195] Run: which crictl
	I1123 08:58:45.770261 1240463 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1123 08:58:45.770677 1240463 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:58:45.770699 1240463 ssh_runner.go:195] Run: which crictl
	I1123 08:58:45.770308 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:58:45.770216 1240463 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1123 08:58:45.770730 1240463 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:58:45.770749 1240463 ssh_runner.go:195] Run: which crictl
	I1123 08:58:45.840935 1240463 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1123 08:58:45.840998 1240463 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:58:45.841051 1240463 ssh_runner.go:195] Run: which crictl
	I1123 08:58:45.864873 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:58:45.864945 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:58:45.865001 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:58:45.865051 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:58:45.865107 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:58:45.865198 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:58:45.869026 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:58:46.001817 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:58:46.001892 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:58:46.001939 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:58:46.001984 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:58:46.002460 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:58:46.002534 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:58:46.002912 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:58:46.116360 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:58:46.116451 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:58:46.116516 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:58:46.116578 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:58:46.116638 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:58:46.116690 1240463 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1123 08:58:46.116756 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:58:46.116817 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:58:46.193921 1240463 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1123 08:58:46.194005 1240463 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1123 08:58:46.194163 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1123 08:58:46.194283 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:58:46.216115 1240463 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1123 08:58:46.216213 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1123 08:58:46.216279 1240463 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1123 08:58:46.216324 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:58:46.216367 1240463 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1123 08:58:46.216410 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:58:46.216457 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1123 08:58:46.216471 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1123 08:58:46.216510 1240463 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1123 08:58:46.216547 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:58:46.216588 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1123 08:58:46.216600 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1123 08:58:46.216632 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1123 08:58:46.216643 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1123 08:58:46.227026 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1123 08:58:46.227076 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1123 08:58:46.261475 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1123 08:58:46.261525 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1123 08:58:46.261590 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1123 08:58:46.261607 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1123 08:58:46.261665 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1123 08:58:46.261681 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1123 08:58:46.286715 1240463 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1123 08:58:46.286799 1240463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1123 08:58:46.297922 1240463 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1123 08:58:46.297965 1240463 retry.go:31] will retry after 136.551449ms: ssh: rejected: connect failed (open failed)
	I1123 08:58:46.434687 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:46.473536 1240463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 08:58:46.659766 1240463 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	W1123 08:58:46.707240 1240463 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1123 08:58:46.707559 1240463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:58:46.805045 1240463 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1123 08:58:46.805117 1240463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1123 08:58:48.451881 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	W1123 08:58:50.453175 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	I1123 08:58:46.957633 1240463 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1123 08:58:46.957674 1240463 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:58:46.957731 1240463 ssh_runner.go:195] Run: which crictl
	I1123 08:58:48.624896 1240463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.819750901s)
	I1123 08:58:48.624921 1240463 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1123 08:58:48.624937 1240463 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:58:48.624983 1240463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:58:48.625031 1240463 ssh_runner.go:235] Completed: which crictl: (1.667286947s)
	I1123 08:58:48.625054 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:58:50.324694 1240463 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.699619678s)
	I1123 08:58:50.324766 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:58:50.324892 1240463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.699900024s)
	I1123 08:58:50.324908 1240463 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1123 08:58:50.324925 1240463 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:58:50.324949 1240463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:58:50.352418 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:58:51.530119 1240463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.205136667s)
	I1123 08:58:51.530145 1240463 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1123 08:58:51.530162 1240463 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:58:51.530162 1240463 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.177715447s)
	I1123 08:58:51.530199 1240463 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1123 08:58:51.530212 1240463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:58:51.530277 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	W1123 08:58:52.472529 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	I1123 08:58:52.951507 1236855 pod_ready.go:94] pod "coredns-66bc5c9577-r5lt5" is "Ready"
	I1123 08:58:52.951533 1236855 pod_ready.go:86] duration metric: took 37.005532256s for pod "coredns-66bc5c9577-r5lt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:52.954271 1236855 pod_ready.go:83] waiting for pod "etcd-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:52.959897 1236855 pod_ready.go:94] pod "etcd-embed-certs-879861" is "Ready"
	I1123 08:58:52.959925 1236855 pod_ready.go:86] duration metric: took 5.630037ms for pod "etcd-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:52.962737 1236855 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:52.968313 1236855 pod_ready.go:94] pod "kube-apiserver-embed-certs-879861" is "Ready"
	I1123 08:58:52.968382 1236855 pod_ready.go:86] duration metric: took 5.581357ms for pod "kube-apiserver-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:52.971144 1236855 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:53.149572 1236855 pod_ready.go:94] pod "kube-controller-manager-embed-certs-879861" is "Ready"
	I1123 08:58:53.149600 1236855 pod_ready.go:86] duration metric: took 178.385211ms for pod "kube-controller-manager-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:53.349682 1236855 pod_ready.go:83] waiting for pod "kube-proxy-bf5ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:53.750199 1236855 pod_ready.go:94] pod "kube-proxy-bf5ck" is "Ready"
	I1123 08:58:53.750224 1236855 pod_ready.go:86] duration metric: took 400.516781ms for pod "kube-proxy-bf5ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:53.949108 1236855 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:54.349090 1236855 pod_ready.go:94] pod "kube-scheduler-embed-certs-879861" is "Ready"
	I1123 08:58:54.349120 1236855 pod_ready.go:86] duration metric: took 399.990083ms for pod "kube-scheduler-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:54.349131 1236855 pod_ready.go:40] duration metric: took 38.407011262s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:58:54.415303 1236855 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 08:58:54.418862 1236855 out.go:179] * Done! kubectl is now configured to use "embed-certs-879861" cluster and "default" namespace by default
	I1123 08:58:52.929997 1240463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.39976243s)
	I1123 08:58:52.930026 1240463 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1123 08:58:52.930043 1240463 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:58:52.930041 1240463 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.399747135s)
	I1123 08:58:52.930064 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1123 08:58:52.930085 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1123 08:58:52.930093 1240463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:58:54.392283 1240463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.462170298s)
	I1123 08:58:54.392310 1240463 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1123 08:58:54.392329 1240463 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:58:54.392375 1240463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:58:58.211727 1240463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.819324837s)
	I1123 08:58:58.211752 1240463 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1123 08:58:58.211770 1240463 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1123 08:58:58.211821 1240463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1123 08:58:58.765786 1240463 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1123 08:58:58.765828 1240463 cache_images.go:125] Successfully loaded all cached images
	I1123 08:58:58.765835 1240463 cache_images.go:94] duration metric: took 13.358345309s to LoadCachedImages
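	LoadCachedImages above works image by image: check whether the tar already exists under /var/lib/minikube/images on the node, copy it over if not, then run sudo podman load -i. The sketch below reproduces that per-image sequence; minikube does the copy over its SSH runner, but docker cp against the kic container is used here only to keep the example self-contained, and the host paths are shortened.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// loadCachedImage copies one cached tarball into the kic container and loads
	// it with podman, roughly the scp + "sudo podman load -i" sequence in the log.
	// Container name and node paths are taken from the log; error handling is minimal.
	func loadCachedImage(container, tarOnHost, tarOnNode string) error {
		// Skip the copy if the tar is already present (the "existence check" lines).
		if err := exec.Command("docker", "exec", container, "stat", tarOnNode).Run(); err != nil {
			if err := exec.Command("docker", "cp", tarOnHost, container+":"+tarOnNode).Run(); err != nil {
				return fmt.Errorf("copy %s: %w", tarOnHost, err)
			}
		}
		out, err := exec.Command("docker", "exec", container, "sudo", "podman", "load", "-i", tarOnNode).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		err := loadCachedImage("no-preload-591175",
			"/home/jenkins/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1",
			"/var/lib/minikube/images/pause_3.10.1")
		fmt.Println(err)
	}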
	I1123 08:58:58.765846 1240463 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1123 08:58:58.765931 1240463 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-591175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-591175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:58:58.766013 1240463 ssh_runner.go:195] Run: crio config
	I1123 08:58:58.842899 1240463 cni.go:84] Creating CNI manager for ""
	I1123 08:58:58.842919 1240463 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:58:58.842937 1240463 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:58:58.842960 1240463 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-591175 NodeName:no-preload-591175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:58:58.843083 1240463 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-591175"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
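Note that the generated KubeletConfiguration above deliberately disables disk-pressure eviction (imageGCHighThresholdPercent: 100 and all evictionHard thresholds at "0%"). The following quick sketch parses that fragment and checks those values; it assumes gopkg.in/yaml.v3 is available and is not part of minikube.

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// Only the fields of interest from the KubeletConfiguration above.
type kubeletCfg struct {
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
	FailSwapOn                  bool              `yaml:"failSwapOn"`
}

const fragment = `
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
`

func main() {
	var cfg kubeletCfg
	if err := yaml.Unmarshal([]byte(fragment), &cfg); err != nil {
		panic(err)
	}
	disabled := cfg.ImageGCHighThresholdPercent == 100
	for _, v := range cfg.EvictionHard {
		disabled = disabled && v == "0%"
	}
	fmt.Printf("disk eviction disabled: %v, failSwapOn: %v\n", disabled, cfg.FailSwapOn)
}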
	
	I1123 08:58:58.843156 1240463 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:58:58.851245 1240463 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1123 08:58:58.851312 1240463 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1123 08:58:58.858773 1240463 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1123 08:58:58.858869 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1123 08:58:58.859328 1240463 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1123 08:58:58.859760 1240463 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1123 08:58:58.862896 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1123 08:58:58.862922 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1123 08:58:59.785752 1240463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:58:59.798943 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1123 08:58:59.802356 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1123 08:58:59.802393 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1123 08:59:00.245994 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1123 08:59:00.263970 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1123 08:59:00.264003 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
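download.go above fetches each missing binary from dl.k8s.io and verifies it against the .sha256 file referenced in the ?checksum= URL. Here is a self-contained sketch of that fetch-and-verify step, assuming the published checksum file contains the hex digest as its first field; the helper name is invented for illustration.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dest and checks it against the hex SHA-256
// published alongside it at url+".sha256" (the layout dl.k8s.io uses).
func fetchVerified(url, dest string) error {
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(strings.TrimSpace(string(sumBytes)))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file for %s", url)
	}
	want := fields[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := sha256.New()
	// Write to disk and hash in a single pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", url, got, want)
	}
	return nil
}

func main() {
	// Version and path taken from the run above; adjust as needed.
	url := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl"
	if err := fetchVerified(url, "kubectl"); err != nil {
		panic(err)
	}
	fmt.Println("kubectl downloaded and verified")
}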
	I1123 08:59:00.656078 1240463 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:59:00.663566 1240463 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 08:59:00.677170 1240463 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:59:00.691261 1240463 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1123 08:59:00.704750 1240463 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:59:00.708038 1240463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:59:00.717399 1240463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:00.837544 1240463 ssh_runner.go:195] Run: sudo systemctl start kubelet
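The /etc/hosts pinning a few lines above (grep -v the old control-plane.minikube.internal entry, then append the current one) could look roughly like the sketch below in Go; the function name is an assumption, and the example writes to a scratch file rather than /etc/hosts.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry rewrites hostsPath so that exactly one line maps ip to host,
// dropping any previous line that ends with "\t<host>".
func ensureHostEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue // drop blank lines and stale entries for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values from the run above; writing to a local scratch file for illustration.
	if err := ensureHostEntry("hosts.test", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}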
	I1123 08:59:00.857319 1240463 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175 for IP: 192.168.85.2
	I1123 08:59:00.857340 1240463 certs.go:195] generating shared ca certs ...
	I1123 08:59:00.857356 1240463 certs.go:227] acquiring lock for ca certs: {Name:mk8b2dd1177c57b74f955f055073d275001ee616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:00.857492 1240463 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key
	I1123 08:59:00.857540 1240463 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key
	I1123 08:59:00.857551 1240463 certs.go:257] generating profile certs ...
	I1123 08:59:00.857607 1240463 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.key
	I1123 08:59:00.857623 1240463 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.crt with IP's: []
	I1123 08:59:00.990174 1240463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.crt ...
	I1123 08:59:00.990204 1240463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.crt: {Name:mkaa4d715caff155fdf8f9316786d20d9ef10f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:00.990434 1240463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.key ...
	I1123 08:59:00.990449 1240463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.key: {Name:mk383de70188bb8a649924aa139520fe91b4660c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:00.990547 1240463 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.key.0b835375
	I1123 08:59:00.990567 1240463 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.crt.0b835375 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 08:59:01.360985 1240463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.crt.0b835375 ...
	I1123 08:59:01.361018 1240463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.crt.0b835375: {Name:mk09e048d577c470d6c46750b3088f9a50b07aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:01.361232 1240463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.key.0b835375 ...
	I1123 08:59:01.361248 1240463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.key.0b835375: {Name:mk4bfc6032c9b270993e51d64472cffe16b701d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:01.361345 1240463 certs.go:382] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.crt.0b835375 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.crt
	I1123 08:59:01.361424 1240463 certs.go:386] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.key.0b835375 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.key
	I1123 08:59:01.361484 1240463 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/proxy-client.key
	I1123 08:59:01.361501 1240463 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/proxy-client.crt with IP's: []
	I1123 08:59:01.415455 1240463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/proxy-client.crt ...
	I1123 08:59:01.415483 1240463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/proxy-client.crt: {Name:mkc4755c9183e83d2edaa4551aab0798e56a8566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:01.415641 1240463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/proxy-client.key ...
	I1123 08:59:01.415656 1240463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/proxy-client.key: {Name:mkd434b226834f082a34f8dd5f7c8fb052327807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
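certs.go and crypto.go above generate profile certificates signed by the shared minikubeCA, with the apiserver cert carrying the SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). Below is a condensed standard-library sketch of the same idea; it is not minikube's code, and details such as key type and validity periods are assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// pemWrite encodes a DER block to path with the given PEM type.
func pemWrite(path, blockType string, der []byte) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	return pem.Encode(f, &pem.Block{Type: blockType, Bytes: der})
}

func main() {
	// Self-signed CA, standing in for minikubeCA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Serving cert signed by the CA, with the SANs seen in the log.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}

	// Write everything out as PEM files in the current directory.
	must := func(e error) {
		if e != nil {
			panic(e)
		}
	}
	must(pemWrite("ca.crt", "CERTIFICATE", caDER))
	must(pemWrite("ca.key", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(caKey)))
	must(pemWrite("apiserver.crt", "CERTIFICATE", srvDER))
	must(pemWrite("apiserver.key", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(srvKey)))
}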
	I1123 08:59:01.415843 1240463 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem (1338 bytes)
	W1123 08:59:01.415888 1240463 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159_empty.pem, impossibly tiny 0 bytes
	I1123 08:59:01.415897 1240463 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:59:01.415925 1240463 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:59:01.415953 1240463 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:59:01.415982 1240463 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem (1675 bytes)
	I1123 08:59:01.416038 1240463 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:59:01.416645 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:59:01.433501 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:59:01.450206 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:59:01.468668 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:59:01.486923 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 08:59:01.504279 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:59:01.522340 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:59:01.539946 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:59:01.557609 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem --> /usr/share/ca-certificates/1043159.pem (1338 bytes)
	I1123 08:59:01.576420 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /usr/share/ca-certificates/10431592.pem (1708 bytes)
	I1123 08:59:01.593985 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:59:01.610561 1240463 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:59:01.627110 1240463 ssh_runner.go:195] Run: openssl version
	I1123 08:59:01.634179 1240463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1043159.pem && ln -fs /usr/share/ca-certificates/1043159.pem /etc/ssl/certs/1043159.pem"
	I1123 08:59:01.642289 1240463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1043159.pem
	I1123 08:59:01.646884 1240463 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:03 /usr/share/ca-certificates/1043159.pem
	I1123 08:59:01.646997 1240463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1043159.pem
	I1123 08:59:01.692144 1240463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1043159.pem /etc/ssl/certs/51391683.0"
	I1123 08:59:01.700122 1240463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10431592.pem && ln -fs /usr/share/ca-certificates/10431592.pem /etc/ssl/certs/10431592.pem"
	I1123 08:59:01.708127 1240463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10431592.pem
	I1123 08:59:01.711628 1240463 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:03 /usr/share/ca-certificates/10431592.pem
	I1123 08:59:01.711696 1240463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10431592.pem
	I1123 08:59:01.752182 1240463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10431592.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:59:01.760116 1240463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:59:01.767808 1240463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:01.771544 1240463 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:01.771608 1240463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:01.812384 1240463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
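Each CA certificate is then hashed with openssl x509 -hash and symlinked into /etc/ssl/certs as <hash>.0 so OpenSSL's hashed-directory lookup finds it, as the commands above show. The sketch below performs that step by shelling out to openssl (as the log does) rather than reimplementing the subject-hash canonicalisation; the local paths are illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash symlinks certPath into certsDir as <subject-hash>.0, the name
// OpenSSL uses when it scans a hashed certificate directory.
func linkByHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("openssl x509 -hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any existing link so repeated runs stay idempotent.
	_ = os.Remove(link)
	if err := os.Symlink(certPath, link); err != nil {
		return "", err
	}
	return link, nil
}

func main() {
	// ca.crt and certs/ are illustrative local paths, not /etc/ssl/certs.
	if err := os.MkdirAll("certs", 0755); err != nil {
		panic(err)
	}
	link, err := linkByHash("ca.crt", "certs")
	if err != nil {
		panic(err)
	}
	fmt.Println("created", link)
}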
	I1123 08:59:01.820274 1240463 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:59:01.823462 1240463 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:59:01.823558 1240463 kubeadm.go:401] StartCluster: {Name:no-preload-591175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-591175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:59:01.823656 1240463 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:59:01.823713 1240463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:59:01.862344 1240463 cri.go:89] found id: ""
	I1123 08:59:01.862417 1240463 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:59:01.870395 1240463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:59:01.879781 1240463 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:59:01.879854 1240463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:59:01.889097 1240463 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:59:01.889130 1240463 kubeadm.go:158] found existing configuration files:
	
	I1123 08:59:01.889186 1240463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:59:01.896694 1240463 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:59:01.896761 1240463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:59:01.905639 1240463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:59:01.914210 1240463 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:59:01.914285 1240463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:59:01.922061 1240463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:59:01.934091 1240463 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:59:01.934421 1240463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:59:01.943557 1240463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:59:01.951273 1240463 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:59:01.951343 1240463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:59:01.960342 1240463 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:59:02.046910 1240463 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 08:59:02.047147 1240463 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 08:59:02.128134 1240463 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Nov 23 08:58:41 embed-certs-879861 crio[655]: time="2025-11-23T08:58:41.780083431Z" level=info msg="Removed container 1b56c929b4cb25bb33bd7d63801d893f91da780d378582ed5211467964815e1a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-26pld/dashboard-metrics-scraper" id=db170f73-b2fa-430c-9f8d-58cc9deee745 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:58:45 embed-certs-879861 conmon[1148]: conmon 5219876e4c84dfa8e988 <ninfo>: container 1151 exited with status 1
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.777907157Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bda1e349-e566-41b1-8676-bb4cfe8cbfc5 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.779002781Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=699dc304-dadf-40c2-ac16-ee3cd356781b name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.780182359Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=478d7d73-c0ad-465e-a12d-8c9ea310d362 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.780411045Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.785667686Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.785975976Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5ca009394572ae2093a39acb297fe0dd6e31ade8522d79cac43b2dd2a18aaf8b/merged/etc/passwd: no such file or directory"
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.786087662Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5ca009394572ae2093a39acb297fe0dd6e31ade8522d79cac43b2dd2a18aaf8b/merged/etc/group: no such file or directory"
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.786511174Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.842147709Z" level=info msg="Created container 974e41dfca4bb5fa762dfa1e5eecade15b2c4b2c22ad82c75a6372877e2740f1: kube-system/storage-provisioner/storage-provisioner" id=478d7d73-c0ad-465e-a12d-8c9ea310d362 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.843059626Z" level=info msg="Starting container: 974e41dfca4bb5fa762dfa1e5eecade15b2c4b2c22ad82c75a6372877e2740f1" id=5f14ab10-2eb6-4ab8-a747-381769320ca8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.844662305Z" level=info msg="Started container" PID=1643 containerID=974e41dfca4bb5fa762dfa1e5eecade15b2c4b2c22ad82c75a6372877e2740f1 description=kube-system/storage-provisioner/storage-provisioner id=5f14ab10-2eb6-4ab8-a747-381769320ca8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=84c1ec4cfc6a7a737101537ddf08b02fdd46566d1ed7589f7e8bba1cccaf0282
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.631583677Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.637325801Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.637475458Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.637562413Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.640707884Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.640866124Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.641858014Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.645015857Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.645144116Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.645219889Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.64825534Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.648361249Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	974e41dfca4bb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   84c1ec4cfc6a7       storage-provisioner                          kube-system
	ceeac5fc728e0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago       Exited              dashboard-metrics-scraper   2                   471bf0a0aed6c       dashboard-metrics-scraper-6ffb444bf9-26pld   kubernetes-dashboard
	d77f359302a17       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   44a69aa60535e       kubernetes-dashboard-855c9754f9-ld9hg        kubernetes-dashboard
	f4c684e5efb0e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   caebf061c8497       busybox                                      default
	65fa45aa8e6ef       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   cbe8ee4456df7       coredns-66bc5c9577-r5lt5                     kube-system
	2cbf8fb48901c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   5e59088b090ed       kindnet-f6j8g                                kube-system
	29b4b15adaa04       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   5181f0e2e8c01       kube-proxy-bf5ck                             kube-system
	5219876e4c84d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   84c1ec4cfc6a7       storage-provisioner                          kube-system
	5aa8c8459e4b9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   42fdda0f8c8b4       kube-scheduler-embed-certs-879861            kube-system
	f36bac59af611       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   ed58ed9cd187d       kube-apiserver-embed-certs-879861            kube-system
	c695f658e9e9e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   1b1235f16aafb       kube-controller-manager-embed-certs-879861   kube-system
	f2d11f1f65489       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   2d92e1327f47a       etcd-embed-certs-879861                      kube-system
	
	
	==> coredns [65fa45aa8e6efe637640f88ff9ceb042fd3e516f2413a13626682652b20062b4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55447 - 57190 "HINFO IN 1215072340600001780.6500194520348239823. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022309463s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-879861
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-879861
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=embed-certs-879861
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_56_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:56:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-879861
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:59:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:58:44 +0000   Sun, 23 Nov 2025 08:56:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:58:44 +0000   Sun, 23 Nov 2025 08:56:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:58:44 +0000   Sun, 23 Nov 2025 08:56:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:58:44 +0000   Sun, 23 Nov 2025 08:57:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-879861
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                1503fdcb-cc7b-4ade-b29c-e34b53c3598b
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-r5lt5                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 etcd-embed-certs-879861                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m23s
	  kube-system                 kindnet-f6j8g                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-embed-certs-879861             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-embed-certs-879861    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-bf5ck                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-embed-certs-879861             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-26pld    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ld9hg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m17s              kube-proxy       
	  Normal   Starting                 53s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m24s              kubelet          Node embed-certs-879861 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m24s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m24s              kubelet          Node embed-certs-879861 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m24s              kubelet          Node embed-certs-879861 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m24s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m19s              node-controller  Node embed-certs-879861 event: Registered Node embed-certs-879861 in Controller
	  Normal   NodeReady                97s                kubelet          Node embed-certs-879861 status is now: NodeReady
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node embed-certs-879861 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node embed-certs-879861 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node embed-certs-879861 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                node-controller  Node embed-certs-879861 event: Registered Node embed-certs-879861 in Controller
	
	
	==> dmesg <==
	[Nov23 08:36] overlayfs: idmapped layers are currently not supported
	[Nov23 08:37] overlayfs: idmapped layers are currently not supported
	[Nov23 08:38] overlayfs: idmapped layers are currently not supported
	[  +8.276067] overlayfs: idmapped layers are currently not supported
	[Nov23 08:39] overlayfs: idmapped layers are currently not supported
	[ +25.090966] overlayfs: idmapped layers are currently not supported
	[Nov23 08:40] overlayfs: idmapped layers are currently not supported
	[ +26.896711] overlayfs: idmapped layers are currently not supported
	[Nov23 08:41] overlayfs: idmapped layers are currently not supported
	[Nov23 08:43] overlayfs: idmapped layers are currently not supported
	[Nov23 08:45] overlayfs: idmapped layers are currently not supported
	[Nov23 08:46] overlayfs: idmapped layers are currently not supported
	[Nov23 08:47] overlayfs: idmapped layers are currently not supported
	[Nov23 08:49] overlayfs: idmapped layers are currently not supported
	[Nov23 08:51] overlayfs: idmapped layers are currently not supported
	[ +55.116920] overlayfs: idmapped layers are currently not supported
	[Nov23 08:52] overlayfs: idmapped layers are currently not supported
	[  +5.731396] overlayfs: idmapped layers are currently not supported
	[Nov23 08:53] overlayfs: idmapped layers are currently not supported
	[Nov23 08:54] overlayfs: idmapped layers are currently not supported
	[Nov23 08:55] overlayfs: idmapped layers are currently not supported
	[Nov23 08:56] overlayfs: idmapped layers are currently not supported
	[Nov23 08:57] overlayfs: idmapped layers are currently not supported
	[Nov23 08:58] overlayfs: idmapped layers are currently not supported
	[ +37.440319] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f2d11f1f6548926a628756b6786fd0d702c8dc1b841329fee5f1f0cb5dd84a13] <==
	{"level":"warn","ts":"2025-11-23T08:58:12.181669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.221549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.259230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.278554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.294099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.314313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.329826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.342964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.414400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.428991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.431083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.444029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.461674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.477775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.500638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.525267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.545153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.560676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.583734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.596745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.624971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.644379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.672840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.691052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.828980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38560","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:59:09 up  9:41,  0 user,  load average: 3.35, 3.20, 2.73
	Linux embed-certs-879861 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2cbf8fb48901c7787acff4b1eea16ad8538ae58630a4f3f48f5f5df71adc621d] <==
	I1123 08:58:15.428388       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:58:15.429345       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 08:58:15.429591       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:58:15.429604       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:58:15.429615       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:58:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:58:15.631375       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:58:15.631444       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:58:15.631480       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:58:15.632233       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:58:45.632122       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 08:58:45.632188       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 08:58:45.632295       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 08:58:45.632442       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1123 08:58:46.931636       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:58:46.931730       1 metrics.go:72] Registering metrics
	I1123 08:58:46.931813       1 controller.go:711] "Syncing nftables rules"
	I1123 08:58:55.631246       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:58:55.631319       1 main.go:301] handling current node
	I1123 08:59:05.639271       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:59:05.639386       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f36bac59af61132cba19015b450f070860b81feac44898c54358545457989e10] <==
	I1123 08:58:14.008203       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 08:58:14.008335       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 08:58:14.021077       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 08:58:14.039290       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 08:58:14.039323       1 policy_source.go:240] refreshing policies
	E1123 08:58:14.043389       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 08:58:14.050805       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:58:14.087306       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 08:58:14.087460       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 08:58:14.087698       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 08:58:14.095065       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 08:58:14.096570       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 08:58:14.105067       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 08:58:14.109887       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:58:14.590131       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:58:14.672819       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:58:15.413842       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:58:15.540452       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:58:15.588337       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:58:15.602799       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:58:15.843623       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.74.10"}
	I1123 08:58:15.882823       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.124.197"}
	I1123 08:58:17.526247       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:58:17.773636       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:58:17.873506       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c695f658e9e9e4d1eb46e631dbd8525ddee010d71131bde0f1db699f3f2daa7c] <==
	I1123 08:58:17.317767       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 08:58:17.317782       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:58:17.317793       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 08:58:17.318897       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 08:58:17.320037       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 08:58:17.321177       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:58:17.321246       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:58:17.324529       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:58:17.324542       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 08:58:17.324637       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 08:58:17.324694       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 08:58:17.324725       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 08:58:17.324753       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:58:17.329211       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:58:17.330285       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:58:17.330290       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:58:17.333482       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 08:58:17.335702       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 08:58:17.348974       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:58:17.366307       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:58:17.366436       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:58:17.366555       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:58:17.366648       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-879861"
	I1123 08:58:17.366710       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 08:58:17.367287       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [29b4b15adaa040ee90f26e40b8ffbe32430ac9644e8116b1b5285cd10d5bca0a] <==
	I1123 08:58:15.543002       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:58:15.660444       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:58:15.783305       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:58:15.783352       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 08:58:15.783452       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:58:15.832155       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:58:15.832283       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:58:15.837131       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:58:15.837477       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:58:15.837489       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:58:15.838625       1 config.go:200] "Starting service config controller"
	I1123 08:58:15.838683       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:58:15.844837       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:58:15.844897       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:58:15.844959       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:58:15.845003       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:58:15.845604       1 config.go:309] "Starting node config controller"
	I1123 08:58:15.845651       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:58:15.845679       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:58:15.941233       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:58:15.946019       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:58:15.947596       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5aa8c8459e4b9c23abe051762e95525327017b8430025151409aa986f851ce46] <==
	I1123 08:58:12.515434       1 serving.go:386] Generated self-signed cert in-memory
	I1123 08:58:14.305083       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:58:14.305111       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:58:14.310773       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:58:14.310865       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 08:58:14.310891       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 08:58:14.310916       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:58:14.313294       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:58:14.313309       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:58:14.314467       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:58:14.314478       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:58:14.411162       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 08:58:14.415293       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:58:14.415300       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:58:18 embed-certs-879861 kubelet[787]: I1123 08:58:18.093975     787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz82c\" (UniqueName: \"kubernetes.io/projected/d3f15842-9da9-4d8d-ae2b-dadc7e55e00a-kube-api-access-wz82c\") pod \"kubernetes-dashboard-855c9754f9-ld9hg\" (UID: \"d3f15842-9da9-4d8d-ae2b-dadc7e55e00a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ld9hg"
	Nov 23 08:58:18 embed-certs-879861 kubelet[787]: W1123 08:58:18.317823     787 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/crio-471bf0a0aed6ce1960c5e9dadff486fc051b601f9090f4856a7cb70ac17d50a7 WatchSource:0}: Error finding container 471bf0a0aed6ce1960c5e9dadff486fc051b601f9090f4856a7cb70ac17d50a7: Status 404 returned error can't find the container with id 471bf0a0aed6ce1960c5e9dadff486fc051b601f9090f4856a7cb70ac17d50a7
	Nov 23 08:58:18 embed-certs-879861 kubelet[787]: W1123 08:58:18.325178     787 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/crio-44a69aa60535ec3b554fbbc3efc5b5d40d48e5fb780551673ff02c8fa0fbc01f WatchSource:0}: Error finding container 44a69aa60535ec3b554fbbc3efc5b5d40d48e5fb780551673ff02c8fa0fbc01f: Status 404 returned error can't find the container with id 44a69aa60535ec3b554fbbc3efc5b5d40d48e5fb780551673ff02c8fa0fbc01f
	Nov 23 08:58:22 embed-certs-879861 kubelet[787]: I1123 08:58:22.436521     787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 08:58:22 embed-certs-879861 kubelet[787]: I1123 08:58:22.713264     787 scope.go:117] "RemoveContainer" containerID="a72be53909da8b167dca8d8b5b6b81f55aae1832ad500cbd07a987f5bf988961"
	Nov 23 08:58:23 embed-certs-879861 kubelet[787]: I1123 08:58:23.715610     787 scope.go:117] "RemoveContainer" containerID="1b56c929b4cb25bb33bd7d63801d893f91da780d378582ed5211467964815e1a"
	Nov 23 08:58:23 embed-certs-879861 kubelet[787]: E1123 08:58:23.715777     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-26pld_kubernetes-dashboard(dc44a6a1-381f-4ba1-a950-7b4da68f100d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-26pld" podUID="dc44a6a1-381f-4ba1-a950-7b4da68f100d"
	Nov 23 08:58:23 embed-certs-879861 kubelet[787]: I1123 08:58:23.719675     787 scope.go:117] "RemoveContainer" containerID="a72be53909da8b167dca8d8b5b6b81f55aae1832ad500cbd07a987f5bf988961"
	Nov 23 08:58:24 embed-certs-879861 kubelet[787]: I1123 08:58:24.719055     787 scope.go:117] "RemoveContainer" containerID="1b56c929b4cb25bb33bd7d63801d893f91da780d378582ed5211467964815e1a"
	Nov 23 08:58:24 embed-certs-879861 kubelet[787]: E1123 08:58:24.723408     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-26pld_kubernetes-dashboard(dc44a6a1-381f-4ba1-a950-7b4da68f100d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-26pld" podUID="dc44a6a1-381f-4ba1-a950-7b4da68f100d"
	Nov 23 08:58:27 embed-certs-879861 kubelet[787]: I1123 08:58:27.747481     787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ld9hg" podStartSLOduration=2.1121935 podStartE2EDuration="10.740358241s" podCreationTimestamp="2025-11-23 08:58:17 +0000 UTC" firstStartedPulling="2025-11-23 08:58:18.327973893 +0000 UTC m=+9.976815169" lastFinishedPulling="2025-11-23 08:58:26.956138634 +0000 UTC m=+18.604979910" observedRunningTime="2025-11-23 08:58:27.739637474 +0000 UTC m=+19.388478881" watchObservedRunningTime="2025-11-23 08:58:27.740358241 +0000 UTC m=+19.389199509"
	Nov 23 08:58:28 embed-certs-879861 kubelet[787]: I1123 08:58:28.727566     787 scope.go:117] "RemoveContainer" containerID="1b56c929b4cb25bb33bd7d63801d893f91da780d378582ed5211467964815e1a"
	Nov 23 08:58:28 embed-certs-879861 kubelet[787]: E1123 08:58:28.727785     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-26pld_kubernetes-dashboard(dc44a6a1-381f-4ba1-a950-7b4da68f100d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-26pld" podUID="dc44a6a1-381f-4ba1-a950-7b4da68f100d"
	Nov 23 08:58:41 embed-certs-879861 kubelet[787]: I1123 08:58:41.623601     787 scope.go:117] "RemoveContainer" containerID="1b56c929b4cb25bb33bd7d63801d893f91da780d378582ed5211467964815e1a"
	Nov 23 08:58:41 embed-certs-879861 kubelet[787]: I1123 08:58:41.762872     787 scope.go:117] "RemoveContainer" containerID="1b56c929b4cb25bb33bd7d63801d893f91da780d378582ed5211467964815e1a"
	Nov 23 08:58:41 embed-certs-879861 kubelet[787]: I1123 08:58:41.763152     787 scope.go:117] "RemoveContainer" containerID="ceeac5fc728e0c53507ccd760fb0549c119ea4b2b3d759bbc85de9f0c282089b"
	Nov 23 08:58:41 embed-certs-879861 kubelet[787]: E1123 08:58:41.763397     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-26pld_kubernetes-dashboard(dc44a6a1-381f-4ba1-a950-7b4da68f100d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-26pld" podUID="dc44a6a1-381f-4ba1-a950-7b4da68f100d"
	Nov 23 08:58:45 embed-certs-879861 kubelet[787]: I1123 08:58:45.777310     787 scope.go:117] "RemoveContainer" containerID="5219876e4c84dfa8e988407b4095b408a9a272dd85d5f216ad25d5cb4fed1fe9"
	Nov 23 08:58:48 embed-certs-879861 kubelet[787]: I1123 08:58:48.727717     787 scope.go:117] "RemoveContainer" containerID="ceeac5fc728e0c53507ccd760fb0549c119ea4b2b3d759bbc85de9f0c282089b"
	Nov 23 08:58:48 embed-certs-879861 kubelet[787]: E1123 08:58:48.727891     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-26pld_kubernetes-dashboard(dc44a6a1-381f-4ba1-a950-7b4da68f100d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-26pld" podUID="dc44a6a1-381f-4ba1-a950-7b4da68f100d"
	Nov 23 08:58:59 embed-certs-879861 kubelet[787]: I1123 08:58:59.623874     787 scope.go:117] "RemoveContainer" containerID="ceeac5fc728e0c53507ccd760fb0549c119ea4b2b3d759bbc85de9f0c282089b"
	Nov 23 08:58:59 embed-certs-879861 kubelet[787]: E1123 08:58:59.624096     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-26pld_kubernetes-dashboard(dc44a6a1-381f-4ba1-a950-7b4da68f100d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-26pld" podUID="dc44a6a1-381f-4ba1-a950-7b4da68f100d"
	Nov 23 08:59:06 embed-certs-879861 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:59:06 embed-certs-879861 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:59:06 embed-certs-879861 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [d77f359302a172c9c103fec56f3daf8bd603240bb30346c5c20f8d13be6368bf] <==
	2025/11/23 08:58:27 Using namespace: kubernetes-dashboard
	2025/11/23 08:58:27 Using in-cluster config to connect to apiserver
	2025/11/23 08:58:27 Using secret token for csrf signing
	2025/11/23 08:58:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 08:58:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 08:58:27 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 08:58:27 Generating JWE encryption key
	2025/11/23 08:58:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 08:58:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 08:58:27 Initializing JWE encryption key from synchronized object
	2025/11/23 08:58:27 Creating in-cluster Sidecar client
	2025/11/23 08:58:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 08:58:27 Serving insecurely on HTTP port: 9090
	2025/11/23 08:58:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 08:58:27 Starting overwatch
	
	
	==> storage-provisioner [5219876e4c84dfa8e988407b4095b408a9a272dd85d5f216ad25d5cb4fed1fe9] <==
	I1123 08:58:15.494362       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 08:58:45.511001       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [974e41dfca4bb5fa762dfa1e5eecade15b2c4b2c22ad82c75a6372877e2740f1] <==
	I1123 08:58:45.885730       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:58:45.901539       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:58:45.901652       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:58:45.910956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:49.366809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:53.627699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:57.227775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:00.287543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:03.318140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:03.327393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:59:03.327604       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:59:03.329826       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-879861_8a895dce-dda4-4d31-a9f2-00e1c29552ac!
	I1123 08:59:03.337083       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"862d7238-c68b-409a-ac2b-154a7a322a6b", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-879861_8a895dce-dda4-4d31-a9f2-00e1c29552ac became leader
	W1123 08:59:03.340139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:03.348625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:59:03.430252       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-879861_8a895dce-dda4-4d31-a9f2-00e1c29552ac!
	W1123 08:59:05.352431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:05.361626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:07.371431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:07.384536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:09.387321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:09.406237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
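The storage-provisioner block that closes the dump above shows the usual client-go leader-election sequence: the pod announces it is attempting to acquire the kube-system/k8s.io-minikube-hostpath lease, keeps renewing it (each renewal tripping the "v1 Endpoints is deprecated" warning, which suggests the provisioner still uses the legacy Endpoints-based lock), and only starts its controller once the lease is acquired at 08:59:03. For reference, a minimal hedged Go sketch of that pattern using the current Lease lock from client-go; this is illustrative only, not the storage-provisioner's actual source, and the durations are placeholder values:

	// Hedged sketch, not storage-provisioner code: the client-go leader-election
	// pattern behind "attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath".
	// The provisioner appears to use the older Endpoints lock (hence the deprecation
	// warnings above); this sketch uses the current Lease lock instead.
	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // in-cluster config, as the provisioner pod would use
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		hostname, _ := os.Hostname()
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Namespace: "kube-system",
				Name:      "k8s.io-minikube-hostpath", // lease name taken from the log above
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:            lock,
			LeaseDuration:   15 * time.Second, // placeholder timings
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			ReleaseOnCancel: true,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease, starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease, shutting down")
				},
			},
		})
	}
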
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-879861 -n embed-certs-879861
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-879861 -n embed-certs-879861: exit status 2 (585.455217ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
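The --format={{.APIServer}} argument is a Go text/template rendered against minikube's status object, so the command can print "Running" for the apiserver while still exiting non-zero; the exit code appears to summarise overall component health (the kubelet had just been stopped by the pause), not the single field the template selects, which is why the harness notes "may be ok". A small hedged illustration of that template mechanism, with a stand-in struct rather than minikube's real status type:

	// Hedged illustration only: minikube's --format flag takes a Go text/template
	// rendered against its status object. The Status struct below is a stand-in,
	// not minikube's real type.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct { // hypothetical stand-in for minikube's status fields
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		// Prints "Running" even though another component is stopped; overall
		// health is signalled separately (here, via the process exit code).
		_ = tmpl.Execute(os.Stdout, st)
	}
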
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-879861 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
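One recurring pattern in the post-mortem logs above is worth calling out: the kindnet reflector errors ("Failed to watch ... dial tcp 10.96.0.1:443: i/o timeout") and the first storage-provisioner's fatal version check both go through the in-cluster kubernetes Service VIP, which was briefly unreachable while the control plane came back up after the 08:58 restart; once the initial List succeeds, the components log "Caches are synced". A hedged sketch of the client-go informer machinery that produces those reflector messages, assuming a standard in-cluster config (illustrative only, not kindnet's source):

	// Hedged sketch: a client-go shared informer whose reflector list/watches the
	// apiserver through the Service VIP; if the VIP is unreachable, it logs the
	// same kind of "Failed to watch ... i/o timeout" errors seen above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // resolves the apiserver via the kubernetes Service VIP
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
		podInformer := factory.Core().V1().Pods().Informer()
		podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				fmt.Println("pod added:", obj.(*corev1.Pod).Name)
			},
		})

		ctx := context.Background()
		factory.Start(ctx.Done())
		// Blocks until the initial List succeeds, which is why "Caches are synced"
		// only appears once the apiserver is reachable again.
		cache.WaitForCacheSync(ctx.Done(), podInformer.HasSynced)
	}
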
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-879861
helpers_test.go:243: (dbg) docker inspect embed-certs-879861:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5",
	        "Created": "2025-11-23T08:56:19.024991587Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1236983,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:58:01.554798429Z",
	            "FinishedAt": "2025-11-23T08:58:00.706397377Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/hostname",
	        "HostsPath": "/var/lib/docker/containers/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/hosts",
	        "LogPath": "/var/lib/docker/containers/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5-json.log",
	        "Name": "/embed-certs-879861",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-879861:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-879861",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5",
	                "LowerDir": "/var/lib/docker/overlay2/a3ebc4c752dd4d002b5943db6e5cfab20a769c34737858969bb4d642f4ef53ce-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3ebc4c752dd4d002b5943db6e5cfab20a769c34737858969bb4d642f4ef53ce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3ebc4c752dd4d002b5943db6e5cfab20a769c34737858969bb4d642f4ef53ce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3ebc4c752dd4d002b5943db6e5cfab20a769c34737858969bb4d642f4ef53ce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-879861",
	                "Source": "/var/lib/docker/volumes/embed-certs-879861/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-879861",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-879861",
	                "name.minikube.sigs.k8s.io": "embed-certs-879861",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fddd92661f07dc298a6b82937fe2d81ad80e7e6f10bb08a57756cd1f11978b56",
	            "SandboxKey": "/var/run/docker/netns/fddd92661f07",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34537"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34538"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34541"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34539"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34540"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-879861": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:50:4a:16:b4:d0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "74cdfb3f8ce6a2d207916e4d31bc2aa3571f99fa42bfb2db8c6fa76bac60c37f",
	                    "EndpointID": "487044dd38aba703e70e0cc92f69cf7896194e4731032e6e0e82d7805ac6d7cb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-879861",
	                        "0b83e5e6966d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
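The inspect output above also records the host-side port bindings the harness and minikube rely on, for example 8443/tcp (the API server) forwarded to 127.0.0.1:34540. As a hedged illustration, a few lines of Go are enough to pull that binding out of `docker inspect` JSON; the struct below models only the fields used here, not Docker's full inspect schema, and the program name is made up:

	// Hedged sketch: extract the forwarded host port for 8443/tcp from
	// `docker inspect <container>` output fed on stdin.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os"
	)

	type inspect struct { // minimal stand-in for the fields used below
		Name            string
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		var containers []inspect // docker inspect emits a JSON array
		if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
			log.Fatal(err)
		}
		for _, c := range containers {
			for _, b := range c.NetworkSettings.Ports["8443/tcp"] {
				fmt.Printf("%s -> %s:%s\n", c.Name, b.HostIp, b.HostPort)
			}
		}
	}

Fed the output above (for example `docker inspect embed-certs-879861 | go run extract-port.go`, with the file name purely hypothetical), this would print "/embed-certs-879861 -> 127.0.0.1:34540".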
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-879861 -n embed-certs-879861
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-879861 -n embed-certs-879861: exit status 2 (429.628564ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-879861 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-879861 logs -n 25: (1.583975061s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-283312 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:54 UTC │ 23 Nov 25 08:55 UTC │
	│ image   │ old-k8s-version-283312 image list --format=json                                                                                                                                                                                               │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ pause   │ -p old-k8s-version-283312 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ delete  │ -p old-k8s-version-283312                                                                                                                                                                                                                     │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ delete  │ -p old-k8s-version-283312                                                                                                                                                                                                                     │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ start   │ -p default-k8s-diff-port-262764 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p cert-expiration-322507 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-322507       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p cert-expiration-322507                                                                                                                                                                                                                     │ cert-expiration-322507       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-262764 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-262764 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-262764 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ start   │ -p default-k8s-diff-port-262764 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:58 UTC │
	│ addons  │ enable metrics-server -p embed-certs-879861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ stop    │ -p embed-certs-879861 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-879861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ image   │ default-k8s-diff-port-262764 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ pause   │ -p default-k8s-diff-port-262764 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-262764                                                                                                                                                                                                               │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ delete  │ -p default-k8s-diff-port-262764                                                                                                                                                                                                               │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ delete  │ -p disable-driver-mounts-880590                                                                                                                                                                                                               │ disable-driver-mounts-880590 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p no-preload-591175 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │                     │
	│ image   │ embed-certs-879861 image list --format=json                                                                                                                                                                                                   │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ pause   │ -p embed-certs-879861 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:58:36
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:58:36.912718 1240463 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:58:36.912855 1240463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:58:36.912867 1240463 out.go:374] Setting ErrFile to fd 2...
	I1123 08:58:36.912873 1240463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:58:36.913143 1240463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:58:36.913604 1240463 out.go:368] Setting JSON to false
	I1123 08:58:36.914561 1240463 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34862,"bootTime":1763853455,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 08:58:36.914625 1240463 start.go:143] virtualization:  
	I1123 08:58:36.918203 1240463 out.go:179] * [no-preload-591175] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:58:36.922182 1240463 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:58:36.922333 1240463 notify.go:221] Checking for updates...
	I1123 08:58:36.928201 1240463 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:58:36.931235 1240463 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:58:36.934248 1240463 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 08:58:36.937104 1240463 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:58:36.940118 1240463 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:58:36.943679 1240463 config.go:182] Loaded profile config "embed-certs-879861": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:36.943819 1240463 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:58:36.980579 1240463 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:58:36.980696 1240463 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:58:37.043778 1240463 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:58:37.033866344 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:58:37.043877 1240463 docker.go:319] overlay module found
	I1123 08:58:37.047071 1240463 out.go:179] * Using the docker driver based on user configuration
	I1123 08:58:37.049936 1240463 start.go:309] selected driver: docker
	I1123 08:58:37.049958 1240463 start.go:927] validating driver "docker" against <nil>
	I1123 08:58:37.049971 1240463 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:58:37.050724 1240463 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:58:37.107821 1240463 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:58:37.099237097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:58:37.107974 1240463 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:58:37.108210 1240463 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:58:37.111161 1240463 out.go:179] * Using Docker driver with root privileges
	I1123 08:58:37.114088 1240463 cni.go:84] Creating CNI manager for ""
	I1123 08:58:37.114159 1240463 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:58:37.114172 1240463 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:58:37.114249 1240463 start.go:353] cluster config:
	{Name:no-preload-591175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-591175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:58:37.117268 1240463 out.go:179] * Starting "no-preload-591175" primary control-plane node in "no-preload-591175" cluster
	I1123 08:58:37.120029 1240463 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:58:37.122999 1240463 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:58:37.125909 1240463 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:58:37.125997 1240463 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:58:37.126042 1240463 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/config.json ...
	I1123 08:58:37.126072 1240463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/config.json: {Name:mk3d28f5ab07c5113a556e30e572b648086a95c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:37.126310 1240463 cache.go:107] acquiring lock: {Name:mka2cb35964388564c4a147c0f220dec8bb32f92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:37.127077 1240463 cache.go:115] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1123 08:58:37.127101 1240463 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 798.632µs
	I1123 08:58:37.127121 1240463 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1123 08:58:37.127160 1240463 cache.go:107] acquiring lock: {Name:mkfa049396ba1dee12c76864774f3aeacdb25dbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:37.127336 1240463 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:58:37.127803 1240463 cache.go:107] acquiring lock: {Name:mked8fbb27666d48a91880577550b6d3c15d46c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:37.127943 1240463 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:58:37.128161 1240463 cache.go:107] acquiring lock: {Name:mk78ea502d01db87a3fd0add08c07fa53ee3c177 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:37.128268 1240463 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:58:37.128475 1240463 cache.go:107] acquiring lock: {Name:mk8f8894eb123f292e1befe37ca59025bf250750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:37.128579 1240463 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:58:37.128781 1240463 cache.go:107] acquiring lock: {Name:mk5d6b1c9a54df439137e5ed9e773e09f1f35c7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:37.128907 1240463 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1123 08:58:37.129108 1240463 cache.go:107] acquiring lock: {Name:mk24b215fc8a1c4de845c20a5f8cbdfbdd48812c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:37.129243 1240463 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:58:37.129482 1240463 cache.go:107] acquiring lock: {Name:mkd443765c9d6bedf54886650c57996d65552ffb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:37.129614 1240463 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:58:37.131097 1240463 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:58:37.132230 1240463 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:58:37.132540 1240463 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:58:37.132664 1240463 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:58:37.132823 1240463 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:58:37.132826 1240463 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:58:37.132918 1240463 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1123 08:58:37.155394 1240463 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:58:37.155419 1240463 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:58:37.155434 1240463 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:58:37.155491 1240463 start.go:360] acquireMachinesLock for no-preload-591175: {Name:mk29286da1b052dc7b05c36520527aed8159771a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:58:37.155598 1240463 start.go:364] duration metric: took 85.839µs to acquireMachinesLock for "no-preload-591175"
	I1123 08:58:37.155627 1240463 start.go:93] Provisioning new machine with config: &{Name:no-preload-591175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-591175 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:58:37.155706 1240463 start.go:125] createHost starting for "" (driver="docker")
	W1123 08:58:36.952271 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	W1123 08:58:39.450939 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	I1123 08:58:37.159311 1240463 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:58:37.159548 1240463 start.go:159] libmachine.API.Create for "no-preload-591175" (driver="docker")
	I1123 08:58:37.159584 1240463 client.go:173] LocalClient.Create starting
	I1123 08:58:37.159660 1240463 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem
	I1123 08:58:37.159705 1240463 main.go:143] libmachine: Decoding PEM data...
	I1123 08:58:37.159725 1240463 main.go:143] libmachine: Parsing certificate...
	I1123 08:58:37.159778 1240463 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem
	I1123 08:58:37.159800 1240463 main.go:143] libmachine: Decoding PEM data...
	I1123 08:58:37.159815 1240463 main.go:143] libmachine: Parsing certificate...
	I1123 08:58:37.160203 1240463 cli_runner.go:164] Run: docker network inspect no-preload-591175 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:58:37.186702 1240463 cli_runner.go:211] docker network inspect no-preload-591175 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:58:37.186858 1240463 network_create.go:284] running [docker network inspect no-preload-591175] to gather additional debugging logs...
	I1123 08:58:37.186879 1240463 cli_runner.go:164] Run: docker network inspect no-preload-591175
	W1123 08:58:37.204942 1240463 cli_runner.go:211] docker network inspect no-preload-591175 returned with exit code 1
	I1123 08:58:37.204968 1240463 network_create.go:287] error running [docker network inspect no-preload-591175]: docker network inspect no-preload-591175: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-591175 not found
	I1123 08:58:37.204980 1240463 network_create.go:289] output of [docker network inspect no-preload-591175]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-591175 not found
	
	** /stderr **
	I1123 08:58:37.205081 1240463 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:58:37.221895 1240463 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-32d396d9f7df IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:9b:29:4a:5c:ab} reservation:<nil>}
	I1123 08:58:37.222185 1240463 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-859c97accd92 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:ea:cf:62:f4:f8} reservation:<nil>}
	I1123 08:58:37.222510 1240463 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-50e966d7b39a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:1d:b6:b9:b9:ef} reservation:<nil>}
	I1123 08:58:37.222907 1240463 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-74cdfb3f8ce6 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:3d:70:42:34:33} reservation:<nil>}
	I1123 08:58:37.223522 1240463 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c76b20}
	I1123 08:58:37.223547 1240463 network_create.go:124] attempt to create docker network no-preload-591175 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 08:58:37.223597 1240463 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-591175 no-preload-591175
	I1123 08:58:37.285948 1240463 network_create.go:108] docker network no-preload-591175 192.168.85.0/24 created
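The subnet probing logged just above (192.168.49.0/24 through 192.168.76.0/24 are skipped as taken, 192.168.85.0/24 is used) amounts to walking candidate private /24 ranges and taking the first one with no existing bridge. The Go sketch below is only an illustration of that decision, not minikube's actual network.go; freeSubnet and the taken map are hypothetical, and the step of 9 between candidates simply matches the gaps visible in the log.

// freeSubnet walks candidate 192.168.x.0/24 ranges and returns the first
// one not already attached to a Docker bridge, mirroring the
// "skipping subnet ... that is taken" / "using free private subnet" lines.
package main

import "fmt"

func freeSubnet(taken map[string]bool) string {
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return "" // no free private /24 found
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
	}
	fmt.Println(freeSubnet(taken)) // prints 192.168.85.0/24, as chosen in the log
}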
	I1123 08:58:37.285990 1240463 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-591175" container
	I1123 08:58:37.286149 1240463 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:58:37.309222 1240463 cli_runner.go:164] Run: docker volume create no-preload-591175 --label name.minikube.sigs.k8s.io=no-preload-591175 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:58:37.326681 1240463 oci.go:103] Successfully created a docker volume no-preload-591175
	I1123 08:58:37.326773 1240463 cli_runner.go:164] Run: docker run --rm --name no-preload-591175-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-591175 --entrypoint /usr/bin/test -v no-preload-591175:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:58:37.461246 1240463 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1123 08:58:37.482587 1240463 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1123 08:58:37.493165 1240463 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1123 08:58:37.495302 1240463 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1123 08:58:37.499393 1240463 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1123 08:58:37.504583 1240463 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1123 08:58:37.506962 1240463 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1123 08:58:37.551861 1240463 cache.go:157] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1123 08:58:37.551894 1240463 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 423.115774ms
	I1123 08:58:37.551907 1240463 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 08:58:37.918149 1240463 cache.go:157] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 08:58:37.918221 1240463 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 789.748192ms
	I1123 08:58:37.918247 1240463 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 08:58:38.018899 1240463 oci.go:107] Successfully prepared a docker volume no-preload-591175
	I1123 08:58:38.018942 1240463 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1123 08:58:38.019089 1240463 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:58:38.019458 1240463 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:58:38.082651 1240463 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-591175 --name no-preload-591175 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-591175 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-591175 --network no-preload-591175 --ip 192.168.85.2 --volume no-preload-591175:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:58:38.342225 1240463 cache.go:157] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 08:58:38.342259 1240463 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.212780291s
	I1123 08:58:38.342277 1240463 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 08:58:38.432482 1240463 cache.go:157] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 08:58:38.432564 1240463 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.304405439s
	I1123 08:58:38.432591 1240463 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 08:58:38.491905 1240463 cache.go:157] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 08:58:38.491933 1240463 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.364216602s
	I1123 08:58:38.491945 1240463 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 08:58:38.508193 1240463 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Running}}
	I1123 08:58:38.554549 1240463 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 08:58:38.573739 1240463 cache.go:157] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 08:58:38.573764 1240463 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.446624818s
	I1123 08:58:38.573776 1240463 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 08:58:38.598643 1240463 cli_runner.go:164] Run: docker exec no-preload-591175 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:58:38.679006 1240463 oci.go:144] the created container "no-preload-591175" has a running status.
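Every "cli_runner.go:164] Run: docker ..." line above is a shell-out to the docker CLI whose combined output and exit status feed the I/W log lines (for example the expected inspect failure before the network exists). A minimal sketch of that pattern follows; runDocker is a hypothetical helper, not minikube's cli_runner API.

// runDocker shells out to the docker CLI, returning combined output and the
// command error so the caller can log a warning when, e.g., "network inspect"
// exits non-zero before the network has been created.
package main

import (
	"fmt"
	"os/exec"
)

func runDocker(args ...string) (string, error) {
	cmd := exec.Command("docker", args...)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	out, err := runDocker("network", "inspect", "no-preload-591175")
	if err != nil {
		fmt.Println("inspect failed (expected before creation):", err)
	}
	fmt.Print(out)
}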
	I1123 08:58:38.679036 1240463 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa...
	I1123 08:58:38.779427 1240463 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:58:38.805453 1240463 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 08:58:38.840941 1240463 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:58:38.840964 1240463 kic_runner.go:114] Args: [docker exec --privileged no-preload-591175 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:58:38.904011 1240463 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 08:58:38.935826 1240463 machine.go:94] provisionDockerMachine start ...
	I1123 08:58:38.935929 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:39.004922 1240463 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:39.005306 1240463 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34542 <nil> <nil>}
	I1123 08:58:39.005323 1240463 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:58:39.006093 1240463 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46076->127.0.0.1:34542: read: connection reset by peer
	I1123 08:58:39.609969 1240463 cache.go:157] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 08:58:39.609999 1240463 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.480904808s
	I1123 08:58:39.610011 1240463 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 08:58:39.610047 1240463 cache.go:87] Successfully saved all images to host disk.
	I1123 08:58:42.172157 1240463 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-591175
	
	I1123 08:58:42.172200 1240463 ubuntu.go:182] provisioning hostname "no-preload-591175"
	I1123 08:58:42.172277 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:42.216962 1240463 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:42.217361 1240463 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34542 <nil> <nil>}
	I1123 08:58:42.217389 1240463 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-591175 && echo "no-preload-591175" | sudo tee /etc/hostname
	I1123 08:58:42.389090 1240463 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-591175
	
	I1123 08:58:42.389227 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:42.407473 1240463 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:42.407786 1240463 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34542 <nil> <nil>}
	I1123 08:58:42.407803 1240463 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-591175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-591175/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-591175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:58:42.559497 1240463 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:58:42.559588 1240463 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 08:58:42.559626 1240463 ubuntu.go:190] setting up certificates
	I1123 08:58:42.559663 1240463 provision.go:84] configureAuth start
	I1123 08:58:42.559747 1240463 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-591175
	I1123 08:58:42.578240 1240463 provision.go:143] copyHostCerts
	I1123 08:58:42.578298 1240463 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem, removing ...
	I1123 08:58:42.578310 1240463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem
	I1123 08:58:42.578385 1240463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 08:58:42.578505 1240463 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem, removing ...
	I1123 08:58:42.578510 1240463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem
	I1123 08:58:42.578536 1240463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 08:58:42.578592 1240463 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem, removing ...
	I1123 08:58:42.578596 1240463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem
	I1123 08:58:42.578619 1240463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
	I1123 08:58:42.578670 1240463 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.no-preload-591175 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-591175]
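The configureAuth step logged above generates a server certificate signed by the minikube CA, with the SANs listed in the log (127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-591175) and a lifetime matching the CertExpiration value from the cluster config. The sketch below shows one way to produce such a certificate with crypto/x509; it is an illustration only, the ca.pem/ca-key.pem paths are placeholders, the PKCS#1 key format is an assumption, and most error handling is trimmed for brevity.

// Sign a server certificate with an existing CA, using the SANs from the
// "generating server cert ... san=[...]" line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caCertPEM, _ := os.ReadFile("ca.pem")     // placeholder path
	caKeyPEM, _ := os.ReadFile("ca-key.pem")  // placeholder path
	caBlock, _ := pem.Decode(caCertPEM)       // nil checks omitted in this sketch
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	if err != nil {
		panic(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().Unix()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-591175"}},
		DNSNames:     []string{"localhost", "minikube", "no-preload-591175"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}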
	I1123 08:58:42.811716 1240463 provision.go:177] copyRemoteCerts
	I1123 08:58:42.811812 1240463 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:58:42.811879 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:42.841022 1240463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 08:58:42.947223 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:58:42.970352 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:58:42.988454 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:58:43.007175 1240463 provision.go:87] duration metric: took 447.467439ms to configureAuth
	I1123 08:58:43.007275 1240463 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:58:43.007501 1240463 config.go:182] Loaded profile config "no-preload-591175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:43.007620 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:43.026667 1240463 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:43.026994 1240463 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34542 <nil> <nil>}
	I1123 08:58:43.027012 1240463 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:58:43.423594 1240463 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:58:43.423658 1240463 machine.go:97] duration metric: took 4.487808271s to provisionDockerMachine
	I1123 08:58:43.423682 1240463 client.go:176] duration metric: took 6.264088237s to LocalClient.Create
	I1123 08:58:43.423707 1240463 start.go:167] duration metric: took 6.264160826s to libmachine.API.Create "no-preload-591175"
	I1123 08:58:43.423739 1240463 start.go:293] postStartSetup for "no-preload-591175" (driver="docker")
	I1123 08:58:43.423767 1240463 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:58:43.423862 1240463 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:58:43.423927 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:43.442025 1240463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 08:58:43.547251 1240463 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:58:43.550823 1240463 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:58:43.550892 1240463 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:58:43.550917 1240463 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/addons for local assets ...
	I1123 08:58:43.550988 1240463 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/files for local assets ...
	I1123 08:58:43.551072 1240463 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem -> 10431592.pem in /etc/ssl/certs
	I1123 08:58:43.551228 1240463 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:58:43.558825 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:58:43.577376 1240463 start.go:296] duration metric: took 153.605832ms for postStartSetup
	I1123 08:58:43.577803 1240463 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-591175
	I1123 08:58:43.595850 1240463 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/config.json ...
	I1123 08:58:43.596141 1240463 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:58:43.596192 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:43.613717 1240463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 08:58:43.716325 1240463 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:58:43.721329 1240463 start.go:128] duration metric: took 6.565610133s to createHost
	I1123 08:58:43.721356 1240463 start.go:83] releasing machines lock for "no-preload-591175", held for 6.565743865s
	I1123 08:58:43.721434 1240463 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-591175
	I1123 08:58:43.738971 1240463 ssh_runner.go:195] Run: cat /version.json
	I1123 08:58:43.739024 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:43.739082 1240463 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:58:43.739146 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:43.757034 1240463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 08:58:43.757279 1240463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 08:58:43.862906 1240463 ssh_runner.go:195] Run: systemctl --version
	I1123 08:58:43.957997 1240463 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:58:43.992287 1240463 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:58:43.997097 1240463 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:58:43.997167 1240463 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:58:44.030329 1240463 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:58:44.030351 1240463 start.go:496] detecting cgroup driver to use...
	I1123 08:58:44.030404 1240463 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:58:44.030478 1240463 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:58:44.049644 1240463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:58:44.062569 1240463 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:58:44.062671 1240463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:58:44.081468 1240463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:58:44.101459 1240463 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:58:44.227925 1240463 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:58:44.359478 1240463 docker.go:234] disabling docker service ...
	I1123 08:58:44.359549 1240463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:58:44.383227 1240463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:58:44.398566 1240463 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:58:44.538990 1240463 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:58:44.665125 1240463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:58:44.678808 1240463 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:58:44.698846 1240463 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:58:44.698928 1240463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:44.707741 1240463 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 08:58:44.707869 1240463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:44.716942 1240463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:44.728685 1240463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:44.741978 1240463 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:58:44.750885 1240463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:44.761458 1240463 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:44.777867 1240463 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:44.787222 1240463 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:58:44.794961 1240463 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:58:44.802414 1240463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:58:44.932515 1240463 ssh_runner.go:195] Run: sudo systemctl restart crio
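Taken together, the sed edits above (applied before the crio restart just logged) leave /etc/crio/crio.conf.d/02-crio.conf with settings equivalent to the excerpt below. This is an illustrative reconstruction from the substitutions alone, not content captured from the host; TOML table headers and any pre-existing keys are omitted.

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]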
	I1123 08:58:45.180737 1240463 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:58:45.180911 1240463 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:58:45.193707 1240463 start.go:564] Will wait 60s for crictl version
	I1123 08:58:45.194042 1240463 ssh_runner.go:195] Run: which crictl
	I1123 08:58:45.203512 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:58:45.272285 1240463 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:58:45.272420 1240463 ssh_runner.go:195] Run: crio --version
	I1123 08:58:45.312909 1240463 ssh_runner.go:195] Run: crio --version
	I1123 08:58:45.349605 1240463 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1123 08:58:41.451898 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	W1123 08:58:43.452750 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	W1123 08:58:45.952903 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	I1123 08:58:45.352604 1240463 cli_runner.go:164] Run: docker network inspect no-preload-591175 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:58:45.368196 1240463 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:58:45.371894 1240463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:58:45.383454 1240463 kubeadm.go:884] updating cluster {Name:no-preload-591175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-591175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:58:45.383570 1240463 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:58:45.383614 1240463 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:58:45.407452 1240463 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1123 08:58:45.407476 1240463 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1123 08:58:45.407514 1240463 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:58:45.407714 1240463 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:58:45.407802 1240463 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:58:45.407833 1240463 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:58:45.407942 1240463 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:58:45.407992 1240463 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:58:45.408039 1240463 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:58:45.408127 1240463 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1123 08:58:45.410067 1240463 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:58:45.410331 1240463 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:58:45.410495 1240463 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:58:45.410644 1240463 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:58:45.410790 1240463 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1123 08:58:45.410932 1240463 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:58:45.411283 1240463 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:58:45.411537 1240463 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:58:45.619870 1240463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:58:45.636228 1240463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:58:45.636510 1240463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:58:45.649814 1240463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:58:45.650002 1240463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1123 08:58:45.656674 1240463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:58:45.689467 1240463 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1123 08:58:45.689511 1240463 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:58:45.689559 1240463 ssh_runner.go:195] Run: which crictl
	I1123 08:58:45.694353 1240463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1123 08:58:45.769880 1240463 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1123 08:58:45.769969 1240463 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:58:45.770055 1240463 ssh_runner.go:195] Run: which crictl
	I1123 08:58:45.770123 1240463 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1123 08:58:45.770401 1240463 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:58:45.770455 1240463 ssh_runner.go:195] Run: which crictl
	I1123 08:58:45.770171 1240463 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1123 08:58:45.770542 1240463 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1123 08:58:45.770618 1240463 ssh_runner.go:195] Run: which crictl
	I1123 08:58:45.770261 1240463 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1123 08:58:45.770677 1240463 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:58:45.770699 1240463 ssh_runner.go:195] Run: which crictl
	I1123 08:58:45.770308 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:58:45.770216 1240463 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1123 08:58:45.770730 1240463 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:58:45.770749 1240463 ssh_runner.go:195] Run: which crictl
	I1123 08:58:45.840935 1240463 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1123 08:58:45.840998 1240463 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:58:45.841051 1240463 ssh_runner.go:195] Run: which crictl
	I1123 08:58:45.864873 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:58:45.864945 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:58:45.865001 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:58:45.865051 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:58:45.865107 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:58:45.865198 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:58:45.869026 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:58:46.001817 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:58:46.001892 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:58:46.001939 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:58:46.001984 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:58:46.002460 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:58:46.002534 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:58:46.002912 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:58:46.116360 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:58:46.116451 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:58:46.116516 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:58:46.116578 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:58:46.116638 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:58:46.116690 1240463 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1123 08:58:46.116756 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:58:46.116817 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:58:46.193921 1240463 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1123 08:58:46.194005 1240463 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1123 08:58:46.194163 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1123 08:58:46.194283 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:58:46.216115 1240463 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1123 08:58:46.216213 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1123 08:58:46.216279 1240463 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1123 08:58:46.216324 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:58:46.216367 1240463 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1123 08:58:46.216410 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:58:46.216457 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1123 08:58:46.216471 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1123 08:58:46.216510 1240463 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1123 08:58:46.216547 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:58:46.216588 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1123 08:58:46.216600 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1123 08:58:46.216632 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1123 08:58:46.216643 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1123 08:58:46.227026 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1123 08:58:46.227076 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1123 08:58:46.261475 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1123 08:58:46.261525 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1123 08:58:46.261590 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1123 08:58:46.261607 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1123 08:58:46.261665 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1123 08:58:46.261681 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1123 08:58:46.286715 1240463 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1123 08:58:46.286799 1240463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1123 08:58:46.297922 1240463 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1123 08:58:46.297965 1240463 retry.go:31] will retry after 136.551449ms: ssh: rejected: connect failed (open failed)
	I1123 08:58:46.434687 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:58:46.473536 1240463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 08:58:46.659766 1240463 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	W1123 08:58:46.707240 1240463 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1123 08:58:46.707559 1240463 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:58:46.805045 1240463 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1123 08:58:46.805117 1240463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1123 08:58:48.451881 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	W1123 08:58:50.453175 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	I1123 08:58:46.957633 1240463 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1123 08:58:46.957674 1240463 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:58:46.957731 1240463 ssh_runner.go:195] Run: which crictl
	I1123 08:58:48.624896 1240463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.819750901s)
	I1123 08:58:48.624921 1240463 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1123 08:58:48.624937 1240463 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:58:48.624983 1240463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:58:48.625031 1240463 ssh_runner.go:235] Completed: which crictl: (1.667286947s)
	I1123 08:58:48.625054 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:58:50.324694 1240463 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.699619678s)
	I1123 08:58:50.324766 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:58:50.324892 1240463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.699900024s)
	I1123 08:58:50.324908 1240463 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1123 08:58:50.324925 1240463 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:58:50.324949 1240463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:58:50.352418 1240463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:58:51.530119 1240463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.205136667s)
	I1123 08:58:51.530145 1240463 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1123 08:58:51.530162 1240463 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:58:51.530162 1240463 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.177715447s)
	I1123 08:58:51.530199 1240463 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1123 08:58:51.530212 1240463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:58:51.530277 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	W1123 08:58:52.472529 1236855 pod_ready.go:104] pod "coredns-66bc5c9577-r5lt5" is not "Ready", error: <nil>
	I1123 08:58:52.951507 1236855 pod_ready.go:94] pod "coredns-66bc5c9577-r5lt5" is "Ready"
	I1123 08:58:52.951533 1236855 pod_ready.go:86] duration metric: took 37.005532256s for pod "coredns-66bc5c9577-r5lt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:52.954271 1236855 pod_ready.go:83] waiting for pod "etcd-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:52.959897 1236855 pod_ready.go:94] pod "etcd-embed-certs-879861" is "Ready"
	I1123 08:58:52.959925 1236855 pod_ready.go:86] duration metric: took 5.630037ms for pod "etcd-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:52.962737 1236855 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:52.968313 1236855 pod_ready.go:94] pod "kube-apiserver-embed-certs-879861" is "Ready"
	I1123 08:58:52.968382 1236855 pod_ready.go:86] duration metric: took 5.581357ms for pod "kube-apiserver-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:52.971144 1236855 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:53.149572 1236855 pod_ready.go:94] pod "kube-controller-manager-embed-certs-879861" is "Ready"
	I1123 08:58:53.149600 1236855 pod_ready.go:86] duration metric: took 178.385211ms for pod "kube-controller-manager-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:53.349682 1236855 pod_ready.go:83] waiting for pod "kube-proxy-bf5ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:53.750199 1236855 pod_ready.go:94] pod "kube-proxy-bf5ck" is "Ready"
	I1123 08:58:53.750224 1236855 pod_ready.go:86] duration metric: took 400.516781ms for pod "kube-proxy-bf5ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:53.949108 1236855 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:54.349090 1236855 pod_ready.go:94] pod "kube-scheduler-embed-certs-879861" is "Ready"
	I1123 08:58:54.349120 1236855 pod_ready.go:86] duration metric: took 399.990083ms for pod "kube-scheduler-embed-certs-879861" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:58:54.349131 1236855 pod_ready.go:40] duration metric: took 38.407011262s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:58:54.415303 1236855 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 08:58:54.418862 1236855 out.go:179] * Done! kubectl is now configured to use "embed-certs-879861" cluster and "default" namespace by default
	I1123 08:58:52.929997 1240463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.39976243s)
	I1123 08:58:52.930026 1240463 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1123 08:58:52.930043 1240463 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:58:52.930041 1240463 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.399747135s)
	I1123 08:58:52.930064 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1123 08:58:52.930085 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1123 08:58:52.930093 1240463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:58:54.392283 1240463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.462170298s)
	I1123 08:58:54.392310 1240463 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1123 08:58:54.392329 1240463 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:58:54.392375 1240463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:58:58.211727 1240463 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.819324837s)
	I1123 08:58:58.211752 1240463 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1123 08:58:58.211770 1240463 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1123 08:58:58.211821 1240463 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1123 08:58:58.765786 1240463 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1123 08:58:58.765828 1240463 cache_images.go:125] Successfully loaded all cached images
	I1123 08:58:58.765835 1240463 cache_images.go:94] duration metric: took 13.358345309s to LoadCachedImages
	I1123 08:58:58.765846 1240463 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1123 08:58:58.765931 1240463 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-591175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-591175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:58:58.766013 1240463 ssh_runner.go:195] Run: crio config
	I1123 08:58:58.842899 1240463 cni.go:84] Creating CNI manager for ""
	I1123 08:58:58.842919 1240463 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:58:58.842937 1240463 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:58:58.842960 1240463 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-591175 NodeName:no-preload-591175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:58:58.843083 1240463 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-591175"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:58:58.843156 1240463 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:58:58.851245 1240463 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1123 08:58:58.851312 1240463 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1123 08:58:58.858773 1240463 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1123 08:58:58.858869 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1123 08:58:58.859328 1240463 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1123 08:58:58.859760 1240463 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1123 08:58:58.862896 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1123 08:58:58.862922 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1123 08:58:59.785752 1240463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:58:59.798943 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1123 08:58:59.802356 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1123 08:58:59.802393 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1123 08:59:00.245994 1240463 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1123 08:59:00.263970 1240463 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1123 08:59:00.264003 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1123 08:59:00.656078 1240463 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:59:00.663566 1240463 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 08:59:00.677170 1240463 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:59:00.691261 1240463 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1123 08:59:00.704750 1240463 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:59:00.708038 1240463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:59:00.717399 1240463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:00.837544 1240463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:59:00.857319 1240463 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175 for IP: 192.168.85.2
	I1123 08:59:00.857340 1240463 certs.go:195] generating shared ca certs ...
	I1123 08:59:00.857356 1240463 certs.go:227] acquiring lock for ca certs: {Name:mk8b2dd1177c57b74f955f055073d275001ee616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:00.857492 1240463 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key
	I1123 08:59:00.857540 1240463 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key
	I1123 08:59:00.857551 1240463 certs.go:257] generating profile certs ...
	I1123 08:59:00.857607 1240463 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.key
	I1123 08:59:00.857623 1240463 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.crt with IP's: []
	I1123 08:59:00.990174 1240463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.crt ...
	I1123 08:59:00.990204 1240463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.crt: {Name:mkaa4d715caff155fdf8f9316786d20d9ef10f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:00.990434 1240463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.key ...
	I1123 08:59:00.990449 1240463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.key: {Name:mk383de70188bb8a649924aa139520fe91b4660c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:00.990547 1240463 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.key.0b835375
	I1123 08:59:00.990567 1240463 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.crt.0b835375 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 08:59:01.360985 1240463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.crt.0b835375 ...
	I1123 08:59:01.361018 1240463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.crt.0b835375: {Name:mk09e048d577c470d6c46750b3088f9a50b07aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:01.361232 1240463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.key.0b835375 ...
	I1123 08:59:01.361248 1240463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.key.0b835375: {Name:mk4bfc6032c9b270993e51d64472cffe16b701d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:01.361345 1240463 certs.go:382] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.crt.0b835375 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.crt
	I1123 08:59:01.361424 1240463 certs.go:386] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.key.0b835375 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.key
	I1123 08:59:01.361484 1240463 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/proxy-client.key
	I1123 08:59:01.361501 1240463 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/proxy-client.crt with IP's: []
	I1123 08:59:01.415455 1240463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/proxy-client.crt ...
	I1123 08:59:01.415483 1240463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/proxy-client.crt: {Name:mkc4755c9183e83d2edaa4551aab0798e56a8566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:01.415641 1240463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/proxy-client.key ...
	I1123 08:59:01.415656 1240463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/proxy-client.key: {Name:mkd434b226834f082a34f8dd5f7c8fb052327807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:01.415843 1240463 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem (1338 bytes)
	W1123 08:59:01.415888 1240463 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159_empty.pem, impossibly tiny 0 bytes
	I1123 08:59:01.415897 1240463 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:59:01.415925 1240463 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:59:01.415953 1240463 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:59:01.415982 1240463 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem (1675 bytes)
	I1123 08:59:01.416038 1240463 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:59:01.416645 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:59:01.433501 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:59:01.450206 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:59:01.468668 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:59:01.486923 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 08:59:01.504279 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:59:01.522340 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:59:01.539946 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:59:01.557609 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem --> /usr/share/ca-certificates/1043159.pem (1338 bytes)
	I1123 08:59:01.576420 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /usr/share/ca-certificates/10431592.pem (1708 bytes)
	I1123 08:59:01.593985 1240463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:59:01.610561 1240463 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:59:01.627110 1240463 ssh_runner.go:195] Run: openssl version
	I1123 08:59:01.634179 1240463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1043159.pem && ln -fs /usr/share/ca-certificates/1043159.pem /etc/ssl/certs/1043159.pem"
	I1123 08:59:01.642289 1240463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1043159.pem
	I1123 08:59:01.646884 1240463 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:03 /usr/share/ca-certificates/1043159.pem
	I1123 08:59:01.646997 1240463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1043159.pem
	I1123 08:59:01.692144 1240463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1043159.pem /etc/ssl/certs/51391683.0"
	I1123 08:59:01.700122 1240463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10431592.pem && ln -fs /usr/share/ca-certificates/10431592.pem /etc/ssl/certs/10431592.pem"
	I1123 08:59:01.708127 1240463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10431592.pem
	I1123 08:59:01.711628 1240463 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:03 /usr/share/ca-certificates/10431592.pem
	I1123 08:59:01.711696 1240463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10431592.pem
	I1123 08:59:01.752182 1240463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10431592.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:59:01.760116 1240463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:59:01.767808 1240463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:01.771544 1240463 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:01.771608 1240463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:01.812384 1240463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:59:01.820274 1240463 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:59:01.823462 1240463 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:59:01.823558 1240463 kubeadm.go:401] StartCluster: {Name:no-preload-591175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-591175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:59:01.823656 1240463 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:59:01.823713 1240463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:59:01.862344 1240463 cri.go:89] found id: ""
	I1123 08:59:01.862417 1240463 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:59:01.870395 1240463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:59:01.879781 1240463 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:59:01.879854 1240463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:59:01.889097 1240463 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:59:01.889130 1240463 kubeadm.go:158] found existing configuration files:
	
	I1123 08:59:01.889186 1240463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:59:01.896694 1240463 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:59:01.896761 1240463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:59:01.905639 1240463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:59:01.914210 1240463 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:59:01.914285 1240463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:59:01.922061 1240463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:59:01.934091 1240463 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:59:01.934421 1240463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:59:01.943557 1240463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:59:01.951273 1240463 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:59:01.951343 1240463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:59:01.960342 1240463 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:59:02.046910 1240463 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 08:59:02.047147 1240463 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 08:59:02.128134 1240463 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Nov 23 08:58:41 embed-certs-879861 crio[655]: time="2025-11-23T08:58:41.780083431Z" level=info msg="Removed container 1b56c929b4cb25bb33bd7d63801d893f91da780d378582ed5211467964815e1a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-26pld/dashboard-metrics-scraper" id=db170f73-b2fa-430c-9f8d-58cc9deee745 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:58:45 embed-certs-879861 conmon[1148]: conmon 5219876e4c84dfa8e988 <ninfo>: container 1151 exited with status 1
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.777907157Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bda1e349-e566-41b1-8676-bb4cfe8cbfc5 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.779002781Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=699dc304-dadf-40c2-ac16-ee3cd356781b name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.780182359Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=478d7d73-c0ad-465e-a12d-8c9ea310d362 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.780411045Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.785667686Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.785975976Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5ca009394572ae2093a39acb297fe0dd6e31ade8522d79cac43b2dd2a18aaf8b/merged/etc/passwd: no such file or directory"
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.786087662Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5ca009394572ae2093a39acb297fe0dd6e31ade8522d79cac43b2dd2a18aaf8b/merged/etc/group: no such file or directory"
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.786511174Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.842147709Z" level=info msg="Created container 974e41dfca4bb5fa762dfa1e5eecade15b2c4b2c22ad82c75a6372877e2740f1: kube-system/storage-provisioner/storage-provisioner" id=478d7d73-c0ad-465e-a12d-8c9ea310d362 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.843059626Z" level=info msg="Starting container: 974e41dfca4bb5fa762dfa1e5eecade15b2c4b2c22ad82c75a6372877e2740f1" id=5f14ab10-2eb6-4ab8-a747-381769320ca8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:58:45 embed-certs-879861 crio[655]: time="2025-11-23T08:58:45.844662305Z" level=info msg="Started container" PID=1643 containerID=974e41dfca4bb5fa762dfa1e5eecade15b2c4b2c22ad82c75a6372877e2740f1 description=kube-system/storage-provisioner/storage-provisioner id=5f14ab10-2eb6-4ab8-a747-381769320ca8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=84c1ec4cfc6a7a737101537ddf08b02fdd46566d1ed7589f7e8bba1cccaf0282
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.631583677Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.637325801Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.637475458Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.637562413Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.640707884Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.640866124Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.641858014Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.645015857Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.645144116Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.645219889Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.64825534Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:58:55 embed-certs-879861 crio[655]: time="2025-11-23T08:58:55.648361249Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	974e41dfca4bb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   84c1ec4cfc6a7       storage-provisioner                          kube-system
	ceeac5fc728e0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago       Exited              dashboard-metrics-scraper   2                   471bf0a0aed6c       dashboard-metrics-scraper-6ffb444bf9-26pld   kubernetes-dashboard
	d77f359302a17       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago       Running             kubernetes-dashboard        0                   44a69aa60535e       kubernetes-dashboard-855c9754f9-ld9hg        kubernetes-dashboard
	f4c684e5efb0e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   caebf061c8497       busybox                                      default
	65fa45aa8e6ef       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           56 seconds ago       Running             coredns                     1                   cbe8ee4456df7       coredns-66bc5c9577-r5lt5                     kube-system
	2cbf8fb48901c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   5e59088b090ed       kindnet-f6j8g                                kube-system
	29b4b15adaa04       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           57 seconds ago       Running             kube-proxy                  1                   5181f0e2e8c01       kube-proxy-bf5ck                             kube-system
	5219876e4c84d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   84c1ec4cfc6a7       storage-provisioner                          kube-system
	5aa8c8459e4b9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   42fdda0f8c8b4       kube-scheduler-embed-certs-879861            kube-system
	f36bac59af611       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   ed58ed9cd187d       kube-apiserver-embed-certs-879861            kube-system
	c695f658e9e9e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   1b1235f16aafb       kube-controller-manager-embed-certs-879861   kube-system
	f2d11f1f65489       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   2d92e1327f47a       etcd-embed-certs-879861                      kube-system
	
	
	==> coredns [65fa45aa8e6efe637640f88ff9ceb042fd3e516f2413a13626682652b20062b4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55447 - 57190 "HINFO IN 1215072340600001780.6500194520348239823. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022309463s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-879861
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-879861
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=embed-certs-879861
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_56_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:56:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-879861
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:59:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:58:44 +0000   Sun, 23 Nov 2025 08:56:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:58:44 +0000   Sun, 23 Nov 2025 08:56:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:58:44 +0000   Sun, 23 Nov 2025 08:56:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:58:44 +0000   Sun, 23 Nov 2025 08:57:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-879861
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                1503fdcb-cc7b-4ade-b29c-e34b53c3598b
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-r5lt5                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m21s
	  kube-system                 etcd-embed-certs-879861                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m26s
	  kube-system                 kindnet-f6j8g                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-embed-certs-879861             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-controller-manager-embed-certs-879861    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-bf5ck                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-embed-certs-879861             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-26pld    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ld9hg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m20s              kube-proxy       
	  Normal   Starting                 56s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m27s              kubelet          Node embed-certs-879861 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m27s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m27s              kubelet          Node embed-certs-879861 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m27s              kubelet          Node embed-certs-879861 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m27s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m22s              node-controller  Node embed-certs-879861 event: Registered Node embed-certs-879861 in Controller
	  Normal   NodeReady                100s               kubelet          Node embed-certs-879861 status is now: NodeReady
	  Normal   Starting                 64s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)  kubelet          Node embed-certs-879861 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)  kubelet          Node embed-certs-879861 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)  kubelet          Node embed-certs-879861 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node embed-certs-879861 event: Registered Node embed-certs-879861 in Controller
	
	
	==> dmesg <==
	[Nov23 08:36] overlayfs: idmapped layers are currently not supported
	[Nov23 08:37] overlayfs: idmapped layers are currently not supported
	[Nov23 08:38] overlayfs: idmapped layers are currently not supported
	[  +8.276067] overlayfs: idmapped layers are currently not supported
	[Nov23 08:39] overlayfs: idmapped layers are currently not supported
	[ +25.090966] overlayfs: idmapped layers are currently not supported
	[Nov23 08:40] overlayfs: idmapped layers are currently not supported
	[ +26.896711] overlayfs: idmapped layers are currently not supported
	[Nov23 08:41] overlayfs: idmapped layers are currently not supported
	[Nov23 08:43] overlayfs: idmapped layers are currently not supported
	[Nov23 08:45] overlayfs: idmapped layers are currently not supported
	[Nov23 08:46] overlayfs: idmapped layers are currently not supported
	[Nov23 08:47] overlayfs: idmapped layers are currently not supported
	[Nov23 08:49] overlayfs: idmapped layers are currently not supported
	[Nov23 08:51] overlayfs: idmapped layers are currently not supported
	[ +55.116920] overlayfs: idmapped layers are currently not supported
	[Nov23 08:52] overlayfs: idmapped layers are currently not supported
	[  +5.731396] overlayfs: idmapped layers are currently not supported
	[Nov23 08:53] overlayfs: idmapped layers are currently not supported
	[Nov23 08:54] overlayfs: idmapped layers are currently not supported
	[Nov23 08:55] overlayfs: idmapped layers are currently not supported
	[Nov23 08:56] overlayfs: idmapped layers are currently not supported
	[Nov23 08:57] overlayfs: idmapped layers are currently not supported
	[Nov23 08:58] overlayfs: idmapped layers are currently not supported
	[ +37.440319] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f2d11f1f6548926a628756b6786fd0d702c8dc1b841329fee5f1f0cb5dd84a13] <==
	{"level":"warn","ts":"2025-11-23T08:58:12.181669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.221549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.259230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.278554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.294099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.314313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.329826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.342964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.414400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.428991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.431083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.444029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.461674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.477775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.500638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.525267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.545153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.560676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.583734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.596745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.624971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.644379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.672840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.691052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:12.828980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38560","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:59:12 up  9:41,  0 user,  load average: 3.35, 3.20, 2.73
	Linux embed-certs-879861 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2cbf8fb48901c7787acff4b1eea16ad8538ae58630a4f3f48f5f5df71adc621d] <==
	I1123 08:58:15.428388       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:58:15.429345       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 08:58:15.429591       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:58:15.429604       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:58:15.429615       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:58:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:58:15.631375       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:58:15.631444       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:58:15.631480       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:58:15.632233       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:58:45.632122       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 08:58:45.632188       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 08:58:45.632295       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 08:58:45.632442       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1123 08:58:46.931636       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:58:46.931730       1 metrics.go:72] Registering metrics
	I1123 08:58:46.931813       1 controller.go:711] "Syncing nftables rules"
	I1123 08:58:55.631246       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:58:55.631319       1 main.go:301] handling current node
	I1123 08:59:05.639271       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:59:05.639386       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f36bac59af61132cba19015b450f070860b81feac44898c54358545457989e10] <==
	I1123 08:58:14.008203       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 08:58:14.008335       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 08:58:14.021077       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 08:58:14.039290       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 08:58:14.039323       1 policy_source.go:240] refreshing policies
	E1123 08:58:14.043389       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 08:58:14.050805       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:58:14.087306       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 08:58:14.087460       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 08:58:14.087698       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 08:58:14.095065       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 08:58:14.096570       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 08:58:14.105067       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 08:58:14.109887       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:58:14.590131       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:58:14.672819       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:58:15.413842       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:58:15.540452       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:58:15.588337       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:58:15.602799       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:58:15.843623       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.74.10"}
	I1123 08:58:15.882823       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.124.197"}
	I1123 08:58:17.526247       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:58:17.773636       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:58:17.873506       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c695f658e9e9e4d1eb46e631dbd8525ddee010d71131bde0f1db699f3f2daa7c] <==
	I1123 08:58:17.317767       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 08:58:17.317782       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:58:17.317793       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 08:58:17.318897       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 08:58:17.320037       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 08:58:17.321177       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:58:17.321246       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:58:17.324529       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:58:17.324542       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 08:58:17.324637       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 08:58:17.324694       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 08:58:17.324725       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 08:58:17.324753       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:58:17.329211       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:58:17.330285       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:58:17.330290       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:58:17.333482       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 08:58:17.335702       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 08:58:17.348974       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:58:17.366307       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:58:17.366436       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:58:17.366555       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:58:17.366648       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-879861"
	I1123 08:58:17.366710       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 08:58:17.367287       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [29b4b15adaa040ee90f26e40b8ffbe32430ac9644e8116b1b5285cd10d5bca0a] <==
	I1123 08:58:15.543002       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:58:15.660444       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:58:15.783305       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:58:15.783352       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 08:58:15.783452       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:58:15.832155       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:58:15.832283       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:58:15.837131       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:58:15.837477       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:58:15.837489       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:58:15.838625       1 config.go:200] "Starting service config controller"
	I1123 08:58:15.838683       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:58:15.844837       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:58:15.844897       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:58:15.844959       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:58:15.845003       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:58:15.845604       1 config.go:309] "Starting node config controller"
	I1123 08:58:15.845651       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:58:15.845679       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:58:15.941233       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:58:15.946019       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:58:15.947596       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5aa8c8459e4b9c23abe051762e95525327017b8430025151409aa986f851ce46] <==
	I1123 08:58:12.515434       1 serving.go:386] Generated self-signed cert in-memory
	I1123 08:58:14.305083       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:58:14.305111       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:58:14.310773       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:58:14.310865       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 08:58:14.310891       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 08:58:14.310916       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:58:14.313294       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:58:14.313309       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:58:14.314467       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:58:14.314478       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:58:14.411162       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 08:58:14.415293       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:58:14.415300       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:58:18 embed-certs-879861 kubelet[787]: I1123 08:58:18.093975     787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz82c\" (UniqueName: \"kubernetes.io/projected/d3f15842-9da9-4d8d-ae2b-dadc7e55e00a-kube-api-access-wz82c\") pod \"kubernetes-dashboard-855c9754f9-ld9hg\" (UID: \"d3f15842-9da9-4d8d-ae2b-dadc7e55e00a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ld9hg"
	Nov 23 08:58:18 embed-certs-879861 kubelet[787]: W1123 08:58:18.317823     787 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/crio-471bf0a0aed6ce1960c5e9dadff486fc051b601f9090f4856a7cb70ac17d50a7 WatchSource:0}: Error finding container 471bf0a0aed6ce1960c5e9dadff486fc051b601f9090f4856a7cb70ac17d50a7: Status 404 returned error can't find the container with id 471bf0a0aed6ce1960c5e9dadff486fc051b601f9090f4856a7cb70ac17d50a7
	Nov 23 08:58:18 embed-certs-879861 kubelet[787]: W1123 08:58:18.325178     787 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0b83e5e6966d11634b33c941a02fd0920531b2e59478e7858d998e499d8d8dd5/crio-44a69aa60535ec3b554fbbc3efc5b5d40d48e5fb780551673ff02c8fa0fbc01f WatchSource:0}: Error finding container 44a69aa60535ec3b554fbbc3efc5b5d40d48e5fb780551673ff02c8fa0fbc01f: Status 404 returned error can't find the container with id 44a69aa60535ec3b554fbbc3efc5b5d40d48e5fb780551673ff02c8fa0fbc01f
	Nov 23 08:58:22 embed-certs-879861 kubelet[787]: I1123 08:58:22.436521     787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 08:58:22 embed-certs-879861 kubelet[787]: I1123 08:58:22.713264     787 scope.go:117] "RemoveContainer" containerID="a72be53909da8b167dca8d8b5b6b81f55aae1832ad500cbd07a987f5bf988961"
	Nov 23 08:58:23 embed-certs-879861 kubelet[787]: I1123 08:58:23.715610     787 scope.go:117] "RemoveContainer" containerID="1b56c929b4cb25bb33bd7d63801d893f91da780d378582ed5211467964815e1a"
	Nov 23 08:58:23 embed-certs-879861 kubelet[787]: E1123 08:58:23.715777     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-26pld_kubernetes-dashboard(dc44a6a1-381f-4ba1-a950-7b4da68f100d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-26pld" podUID="dc44a6a1-381f-4ba1-a950-7b4da68f100d"
	Nov 23 08:58:23 embed-certs-879861 kubelet[787]: I1123 08:58:23.719675     787 scope.go:117] "RemoveContainer" containerID="a72be53909da8b167dca8d8b5b6b81f55aae1832ad500cbd07a987f5bf988961"
	Nov 23 08:58:24 embed-certs-879861 kubelet[787]: I1123 08:58:24.719055     787 scope.go:117] "RemoveContainer" containerID="1b56c929b4cb25bb33bd7d63801d893f91da780d378582ed5211467964815e1a"
	Nov 23 08:58:24 embed-certs-879861 kubelet[787]: E1123 08:58:24.723408     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-26pld_kubernetes-dashboard(dc44a6a1-381f-4ba1-a950-7b4da68f100d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-26pld" podUID="dc44a6a1-381f-4ba1-a950-7b4da68f100d"
	Nov 23 08:58:27 embed-certs-879861 kubelet[787]: I1123 08:58:27.747481     787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ld9hg" podStartSLOduration=2.1121935 podStartE2EDuration="10.740358241s" podCreationTimestamp="2025-11-23 08:58:17 +0000 UTC" firstStartedPulling="2025-11-23 08:58:18.327973893 +0000 UTC m=+9.976815169" lastFinishedPulling="2025-11-23 08:58:26.956138634 +0000 UTC m=+18.604979910" observedRunningTime="2025-11-23 08:58:27.739637474 +0000 UTC m=+19.388478881" watchObservedRunningTime="2025-11-23 08:58:27.740358241 +0000 UTC m=+19.389199509"
	Nov 23 08:58:28 embed-certs-879861 kubelet[787]: I1123 08:58:28.727566     787 scope.go:117] "RemoveContainer" containerID="1b56c929b4cb25bb33bd7d63801d893f91da780d378582ed5211467964815e1a"
	Nov 23 08:58:28 embed-certs-879861 kubelet[787]: E1123 08:58:28.727785     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-26pld_kubernetes-dashboard(dc44a6a1-381f-4ba1-a950-7b4da68f100d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-26pld" podUID="dc44a6a1-381f-4ba1-a950-7b4da68f100d"
	Nov 23 08:58:41 embed-certs-879861 kubelet[787]: I1123 08:58:41.623601     787 scope.go:117] "RemoveContainer" containerID="1b56c929b4cb25bb33bd7d63801d893f91da780d378582ed5211467964815e1a"
	Nov 23 08:58:41 embed-certs-879861 kubelet[787]: I1123 08:58:41.762872     787 scope.go:117] "RemoveContainer" containerID="1b56c929b4cb25bb33bd7d63801d893f91da780d378582ed5211467964815e1a"
	Nov 23 08:58:41 embed-certs-879861 kubelet[787]: I1123 08:58:41.763152     787 scope.go:117] "RemoveContainer" containerID="ceeac5fc728e0c53507ccd760fb0549c119ea4b2b3d759bbc85de9f0c282089b"
	Nov 23 08:58:41 embed-certs-879861 kubelet[787]: E1123 08:58:41.763397     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-26pld_kubernetes-dashboard(dc44a6a1-381f-4ba1-a950-7b4da68f100d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-26pld" podUID="dc44a6a1-381f-4ba1-a950-7b4da68f100d"
	Nov 23 08:58:45 embed-certs-879861 kubelet[787]: I1123 08:58:45.777310     787 scope.go:117] "RemoveContainer" containerID="5219876e4c84dfa8e988407b4095b408a9a272dd85d5f216ad25d5cb4fed1fe9"
	Nov 23 08:58:48 embed-certs-879861 kubelet[787]: I1123 08:58:48.727717     787 scope.go:117] "RemoveContainer" containerID="ceeac5fc728e0c53507ccd760fb0549c119ea4b2b3d759bbc85de9f0c282089b"
	Nov 23 08:58:48 embed-certs-879861 kubelet[787]: E1123 08:58:48.727891     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-26pld_kubernetes-dashboard(dc44a6a1-381f-4ba1-a950-7b4da68f100d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-26pld" podUID="dc44a6a1-381f-4ba1-a950-7b4da68f100d"
	Nov 23 08:58:59 embed-certs-879861 kubelet[787]: I1123 08:58:59.623874     787 scope.go:117] "RemoveContainer" containerID="ceeac5fc728e0c53507ccd760fb0549c119ea4b2b3d759bbc85de9f0c282089b"
	Nov 23 08:58:59 embed-certs-879861 kubelet[787]: E1123 08:58:59.624096     787 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-26pld_kubernetes-dashboard(dc44a6a1-381f-4ba1-a950-7b4da68f100d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-26pld" podUID="dc44a6a1-381f-4ba1-a950-7b4da68f100d"
	Nov 23 08:59:06 embed-certs-879861 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:59:06 embed-certs-879861 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:59:06 embed-certs-879861 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [d77f359302a172c9c103fec56f3daf8bd603240bb30346c5c20f8d13be6368bf] <==
	2025/11/23 08:58:27 Using namespace: kubernetes-dashboard
	2025/11/23 08:58:27 Using in-cluster config to connect to apiserver
	2025/11/23 08:58:27 Using secret token for csrf signing
	2025/11/23 08:58:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 08:58:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 08:58:27 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 08:58:27 Generating JWE encryption key
	2025/11/23 08:58:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 08:58:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 08:58:27 Initializing JWE encryption key from synchronized object
	2025/11/23 08:58:27 Creating in-cluster Sidecar client
	2025/11/23 08:58:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 08:58:27 Serving insecurely on HTTP port: 9090
	2025/11/23 08:58:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 08:58:27 Starting overwatch
	
	
	==> storage-provisioner [5219876e4c84dfa8e988407b4095b408a9a272dd85d5f216ad25d5cb4fed1fe9] <==
	I1123 08:58:15.494362       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 08:58:45.511001       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [974e41dfca4bb5fa762dfa1e5eecade15b2c4b2c22ad82c75a6372877e2740f1] <==
	I1123 08:58:45.885730       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:58:45.901539       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:58:45.901652       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:58:45.910956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:49.366809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:53.627699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:58:57.227775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:00.287543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:03.318140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:03.327393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:59:03.327604       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:59:03.329826       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-879861_8a895dce-dda4-4d31-a9f2-00e1c29552ac!
	I1123 08:59:03.337083       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"862d7238-c68b-409a-ac2b-154a7a322a6b", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-879861_8a895dce-dda4-4d31-a9f2-00e1c29552ac became leader
	W1123 08:59:03.340139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:03.348625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:59:03.430252       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-879861_8a895dce-dda4-4d31-a9f2-00e1c29552ac!
	W1123 08:59:05.352431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:05.361626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:07.371431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:07.384536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:09.387321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:09.406237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:11.415419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:11.428676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
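The storage-provisioner log above warns on every leader-election poll that v1 Endpoints is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice, while the provisioner still takes its lease through the kube-system/k8s.io-minikube-hostpath Endpoints object, as the LeaderElection event shows. A minimal sketch of inspecting both objects by hand against this profile (an illustration only, assuming the embed-certs-879861 context still exists; not something the test harness runs):

	kubectl --context embed-certs-879861 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml   # legacy lock object the provisioner updates
	kubectl --context embed-certs-879861 -n kube-system get endpointslices                               # the replacement API the warning points to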
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-879861 -n embed-certs-879861
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-879861 -n embed-certs-879861: exit status 2 (498.935061ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-879861 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.40s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-261704 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-261704 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (263.724079ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:59:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-261704 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
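The MK_ADDON_ENABLE_PAUSED error above shows the mechanism behind these EnableAddonWhileActive and Pause failures: before changing an addon, minikube checks whether the cluster is paused by shelling out to `sudo runc list -f json` inside the node, and on this crio node that check fails because /run/runc does not exist. A minimal sketch of reproducing the same check by hand (the profile name is taken from the failing command; the rest is a plausible manual check, not part of the test run):

	out/minikube-linux-arm64 -p newest-cni-261704 ssh -- sudo runc list -f json   # the exact query the paused check shells out to
	out/minikube-linux-arm64 -p newest-cni-261704 ssh -- sudo crictl ps -a        # the same containers viewed through the CRI instead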
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-261704
helpers_test.go:243: (dbg) docker inspect newest-cni-261704:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772",
	        "Created": "2025-11-23T08:59:23.410749327Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1245018,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:59:23.480259954Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772/hostname",
	        "HostsPath": "/var/lib/docker/containers/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772/hosts",
	        "LogPath": "/var/lib/docker/containers/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772-json.log",
	        "Name": "/newest-cni-261704",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-261704:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-261704",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772",
	                "LowerDir": "/var/lib/docker/overlay2/2ae4e31f8fd303775b938dc2f321e4c26fcc60c4aaae4415c302dc6ffd0b5f37-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2ae4e31f8fd303775b938dc2f321e4c26fcc60c4aaae4415c302dc6ffd0b5f37/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2ae4e31f8fd303775b938dc2f321e4c26fcc60c4aaae4415c302dc6ffd0b5f37/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2ae4e31f8fd303775b938dc2f321e4c26fcc60c4aaae4415c302dc6ffd0b5f37/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-261704",
	                "Source": "/var/lib/docker/volumes/newest-cni-261704/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-261704",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-261704",
	                "name.minikube.sigs.k8s.io": "newest-cni-261704",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f573ce931527f93e14f87cbe5aee0990c26857fc7b32575049130d47678f98b9",
	            "SandboxKey": "/var/run/docker/netns/f573ce931527",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34547"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34548"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34551"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34549"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34550"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-261704": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:ba:d6:16:12:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa1dd1d7d3d743751dd4838aea370419371be8ae8924c9730d80d4997d4494cf",
	                    "EndpointID": "db8c5fb363d0829912622db04c448919abc04d9d1855e7d4312348c54f8510fe",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-261704",
	                        "b3bc5f529199"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
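The HostConfig block in the inspect output above decodes to a 3072 MiB, 2-CPU allocation: Memory 3221225472 bytes is 3072 MiB, MemorySwap 6442450944 is exactly twice that, and NanoCpus 2000000000 is 2 CPUs at 10^9 nano-CPUs per CPU. A quick shell check of the arithmetic (illustration only):

	echo $(( 3072 * 1024 * 1024 ))       # 3221225472 -> HostConfig.Memory
	echo $(( 2 * 3072 * 1024 * 1024 ))   # 6442450944 -> HostConfig.MemorySwap
	echo $(( 2 * 1000000000 ))           # 2000000000 -> HostConfig.NanoCpus, i.e. 2 CPUs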
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-261704 -n newest-cni-261704
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-261704 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-261704 logs -n 25: (1.507299026s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-283312                                                                                                                                                                                                                     │ old-k8s-version-283312       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ start   │ -p default-k8s-diff-port-262764 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p cert-expiration-322507 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-322507       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p cert-expiration-322507                                                                                                                                                                                                                     │ cert-expiration-322507       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-262764 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-262764 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-262764 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ start   │ -p default-k8s-diff-port-262764 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:58 UTC │
	│ addons  │ enable metrics-server -p embed-certs-879861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ stop    │ -p embed-certs-879861 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-879861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ image   │ default-k8s-diff-port-262764 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ pause   │ -p default-k8s-diff-port-262764 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-262764                                                                                                                                                                                                               │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ delete  │ -p default-k8s-diff-port-262764                                                                                                                                                                                                               │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ delete  │ -p disable-driver-mounts-880590                                                                                                                                                                                                               │ disable-driver-mounts-880590 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p no-preload-591175 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:59 UTC │
	│ image   │ embed-certs-879861 image list --format=json                                                                                                                                                                                                   │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ pause   │ -p embed-certs-879861 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	│ delete  │ -p embed-certs-879861                                                                                                                                                                                                                         │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ delete  │ -p embed-certs-879861                                                                                                                                                                                                                         │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p newest-cni-261704 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ addons  │ enable metrics-server -p newest-cni-261704 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:59:17
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:59:17.452684 1244564 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:59:17.452899 1244564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:59:17.452926 1244564 out.go:374] Setting ErrFile to fd 2...
	I1123 08:59:17.452946 1244564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:59:17.453215 1244564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:59:17.453682 1244564 out.go:368] Setting JSON to false
	I1123 08:59:17.454726 1244564 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34903,"bootTime":1763853455,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 08:59:17.454820 1244564 start.go:143] virtualization:  
	I1123 08:59:17.459172 1244564 out.go:179] * [newest-cni-261704] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:59:17.462622 1244564 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:59:17.462722 1244564 notify.go:221] Checking for updates...
	I1123 08:59:17.469143 1244564 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:59:17.472274 1244564 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:59:17.475342 1244564 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 08:59:17.478400 1244564 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:59:17.481504 1244564 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:59:17.484979 1244564 config.go:182] Loaded profile config "no-preload-591175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:59:17.485123 1244564 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:59:17.530301 1244564 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:59:17.530484 1244564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:59:17.630889 1244564 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:59:17.616958979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:59:17.630991 1244564 docker.go:319] overlay module found
	I1123 08:59:17.634191 1244564 out.go:179] * Using the docker driver based on user configuration
	I1123 08:59:17.637148 1244564 start.go:309] selected driver: docker
	I1123 08:59:17.637166 1244564 start.go:927] validating driver "docker" against <nil>
	I1123 08:59:17.637180 1244564 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:59:17.637886 1244564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:59:17.728212 1244564 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:59:17.718907792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:59:17.728373 1244564 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1123 08:59:17.728398 1244564 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1123 08:59:17.728633 1244564 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 08:59:17.733216 1244564 out.go:179] * Using Docker driver with root privileges
	I1123 08:59:17.736084 1244564 cni.go:84] Creating CNI manager for ""
	I1123 08:59:17.736155 1244564 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:59:17.736166 1244564 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:59:17.736246 1244564 start.go:353] cluster config:
	{Name:newest-cni-261704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-261704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:59:17.739470 1244564 out.go:179] * Starting "newest-cni-261704" primary control-plane node in "newest-cni-261704" cluster
	I1123 08:59:17.742381 1244564 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:59:17.745320 1244564 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:59:17.748124 1244564 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:59:17.748178 1244564 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 08:59:17.748191 1244564 cache.go:65] Caching tarball of preloaded images
	I1123 08:59:17.748271 1244564 preload.go:238] Found /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 08:59:17.748286 1244564 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:59:17.748402 1244564 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/config.json ...
	I1123 08:59:17.748425 1244564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/config.json: {Name:mkcebbfe251a76b43ceb568921f830f3797ff098 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:17.748578 1244564 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:59:17.774653 1244564 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:59:17.774677 1244564 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:59:17.774697 1244564 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:59:17.774726 1244564 start.go:360] acquireMachinesLock for newest-cni-261704: {Name:mkc157815d36ad5358be83723f2b82d59972bd12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:59:17.774827 1244564 start.go:364] duration metric: took 81.606µs to acquireMachinesLock for "newest-cni-261704"
	I1123 08:59:17.774857 1244564 start.go:93] Provisioning new machine with config: &{Name:newest-cni-261704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-261704 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:59:17.774927 1244564 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:59:17.778434 1244564 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:59:17.778664 1244564 start.go:159] libmachine.API.Create for "newest-cni-261704" (driver="docker")
	I1123 08:59:17.778707 1244564 client.go:173] LocalClient.Create starting
	I1123 08:59:17.778768 1244564 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem
	I1123 08:59:17.778825 1244564 main.go:143] libmachine: Decoding PEM data...
	I1123 08:59:17.778849 1244564 main.go:143] libmachine: Parsing certificate...
	I1123 08:59:17.778900 1244564 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem
	I1123 08:59:17.778922 1244564 main.go:143] libmachine: Decoding PEM data...
	I1123 08:59:17.778938 1244564 main.go:143] libmachine: Parsing certificate...
	I1123 08:59:17.779331 1244564 cli_runner.go:164] Run: docker network inspect newest-cni-261704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:59:17.796536 1244564 cli_runner.go:211] docker network inspect newest-cni-261704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:59:17.796624 1244564 network_create.go:284] running [docker network inspect newest-cni-261704] to gather additional debugging logs...
	I1123 08:59:17.796644 1244564 cli_runner.go:164] Run: docker network inspect newest-cni-261704
	W1123 08:59:17.810963 1244564 cli_runner.go:211] docker network inspect newest-cni-261704 returned with exit code 1
	I1123 08:59:17.810996 1244564 network_create.go:287] error running [docker network inspect newest-cni-261704]: docker network inspect newest-cni-261704: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-261704 not found
	I1123 08:59:17.811009 1244564 network_create.go:289] output of [docker network inspect newest-cni-261704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-261704 not found
	
	** /stderr **
	I1123 08:59:17.811118 1244564 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:59:17.826871 1244564 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-32d396d9f7df IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:9b:29:4a:5c:ab} reservation:<nil>}
	I1123 08:59:17.827176 1244564 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-859c97accd92 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:ea:cf:62:f4:f8} reservation:<nil>}
	I1123 08:59:17.827546 1244564 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-50e966d7b39a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:1d:b6:b9:b9:ef} reservation:<nil>}
	I1123 08:59:17.827958 1244564 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e66d0}
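The four network.go entries above show the subnet scan: minikube walks the private /24 ranges, skips the ones already claimed by existing Docker bridges (192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24), and settles on the first free one, 192.168.76.0/24. A rough shell equivalent of that scan, assuming only the docker CLI (the variable names here are illustrative, not minikube's own code):

	taken=$(docker network ls -q | xargs -r docker network inspect \
	          --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}')
	for candidate in 192.168.49.0/24 192.168.58.0/24 192.168.67.0/24 192.168.76.0/24; do
	  # pick the first candidate that no existing network already uses
	  echo "$taken" | grep -qxF "$candidate" || { echo "free: $candidate"; break; }
	done
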
	I1123 08:59:17.827982 1244564 network_create.go:124] attempt to create docker network newest-cni-261704 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 08:59:17.828044 1244564 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-261704 newest-cni-261704
	I1123 08:59:17.897302 1244564 network_create.go:108] docker network newest-cni-261704 192.168.76.0/24 created
	I1123 08:59:17.897338 1244564 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-261704" container
	I1123 08:59:17.897415 1244564 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:59:17.917695 1244564 cli_runner.go:164] Run: docker volume create newest-cni-261704 --label name.minikube.sigs.k8s.io=newest-cni-261704 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:59:17.946775 1244564 oci.go:103] Successfully created a docker volume newest-cni-261704
	I1123 08:59:17.946872 1244564 cli_runner.go:164] Run: docker run --rm --name newest-cni-261704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-261704 --entrypoint /usr/bin/test -v newest-cni-261704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:59:18.596670 1244564 oci.go:107] Successfully prepared a docker volume newest-cni-261704
	I1123 08:59:18.596752 1244564 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:59:18.596763 1244564 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:59:18.596833 1244564 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-261704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 08:59:22.772927 1240463 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:59:22.773007 1240463 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:59:22.773122 1240463 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:59:22.773195 1240463 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:59:22.773240 1240463 kubeadm.go:319] OS: Linux
	I1123 08:59:22.773291 1240463 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:59:22.773346 1240463 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:59:22.773396 1240463 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:59:22.773455 1240463 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:59:22.773509 1240463 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:59:22.773567 1240463 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:59:22.773623 1240463 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:59:22.773677 1240463 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:59:22.773736 1240463 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:59:22.773822 1240463 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:59:22.773938 1240463 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:59:22.774067 1240463 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:59:22.774147 1240463 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:59:22.789914 1240463 out.go:252]   - Generating certificates and keys ...
	I1123 08:59:22.790057 1240463 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:59:22.790123 1240463 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:59:22.790201 1240463 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:59:22.790266 1240463 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:59:22.790343 1240463 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:59:22.790394 1240463 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:59:22.790449 1240463 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:59:22.790599 1240463 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-591175] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:59:22.790658 1240463 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:59:22.790786 1240463 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-591175] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:59:22.790857 1240463 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:59:22.790926 1240463 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:59:22.790976 1240463 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:59:22.791032 1240463 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:59:22.791082 1240463 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:59:22.791151 1240463 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:59:22.791302 1240463 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:59:22.791367 1240463 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:59:22.791421 1240463 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:59:22.791511 1240463 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:59:22.791577 1240463 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:59:22.825528 1240463 out.go:252]   - Booting up control plane ...
	I1123 08:59:22.825645 1240463 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:59:22.825736 1240463 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:59:22.825813 1240463 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:59:22.825928 1240463 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:59:22.826031 1240463 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:59:22.826146 1240463 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:59:22.826240 1240463 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:59:22.826286 1240463 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:59:22.826429 1240463 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:59:22.826543 1240463 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:59:22.826610 1240463 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001104823s
	I1123 08:59:22.826712 1240463 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:59:22.826802 1240463 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1123 08:59:22.826898 1240463 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:59:22.826982 1240463 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:59:22.827062 1240463 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.133732751s
	I1123 08:59:22.827133 1240463 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.57464497s
	I1123 08:59:22.827237 1240463 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.50245645s
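The [control-plane-check] probes above poll fixed local endpoints, so they can be reproduced by hand on the node if a component stalls during bring-up. A minimal sketch, run inside the minikube container; each endpoint returns "ok" when healthy, and -k is needed because the serving certificates are self-signed:

	curl -sk https://127.0.0.1:10257/healthz   # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez     # kube-scheduler
	curl -sk https://192.168.85.2:8443/livez   # kube-apiserver
	curl -s  http://127.0.0.1:10248/healthz    # kubelet
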
	I1123 08:59:22.827402 1240463 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:59:22.827571 1240463 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:59:22.827654 1240463 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:59:22.827844 1240463 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-591175 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:59:22.827928 1240463 kubeadm.go:319] [bootstrap-token] Using token: prto18.avdat22o9zcyjdgf
	I1123 08:59:22.853241 1240463 out.go:252]   - Configuring RBAC rules ...
	I1123 08:59:22.853370 1240463 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:59:22.853463 1240463 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:59:22.853610 1240463 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:59:22.853767 1240463 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:59:22.853898 1240463 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:59:22.854027 1240463 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:59:22.854156 1240463 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:59:22.854214 1240463 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:59:22.854271 1240463 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:59:22.854282 1240463 kubeadm.go:319] 
	I1123 08:59:22.854363 1240463 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:59:22.854381 1240463 kubeadm.go:319] 
	I1123 08:59:22.854465 1240463 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:59:22.854472 1240463 kubeadm.go:319] 
	I1123 08:59:22.854498 1240463 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:59:22.854587 1240463 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:59:22.854686 1240463 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:59:22.854701 1240463 kubeadm.go:319] 
	I1123 08:59:22.854765 1240463 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:59:22.854772 1240463 kubeadm.go:319] 
	I1123 08:59:22.854853 1240463 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:59:22.854866 1240463 kubeadm.go:319] 
	I1123 08:59:22.854924 1240463 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:59:22.855013 1240463 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:59:22.855090 1240463 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:59:22.855097 1240463 kubeadm.go:319] 
	I1123 08:59:22.855262 1240463 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:59:22.855355 1240463 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:59:22.855362 1240463 kubeadm.go:319] 
	I1123 08:59:22.855460 1240463 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token prto18.avdat22o9zcyjdgf \
	I1123 08:59:22.855577 1240463 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb \
	I1123 08:59:22.855603 1240463 kubeadm.go:319] 	--control-plane 
	I1123 08:59:22.855610 1240463 kubeadm.go:319] 
	I1123 08:59:22.855701 1240463 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:59:22.855709 1240463 kubeadm.go:319] 
	I1123 08:59:22.855797 1240463 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token prto18.avdat22o9zcyjdgf \
	I1123 08:59:22.855925 1240463 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb 
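Should the --discovery-token-ca-cert-hash printed above ever need to be recomputed (for example, to build a fresh join command after the bootstrap token expires), the usual kubeadm recipe applies; assuming the cluster CA sits at ca.crt under the certificate directory named in the [certs] line above, it would look roughly like:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
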
	I1123 08:59:22.855937 1240463 cni.go:84] Creating CNI manager for ""
	I1123 08:59:22.855945 1240463 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:59:22.887198 1240463 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:59:22.914726 1240463 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:59:22.918859 1240463 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:59:22.918877 1240463 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:59:22.932290 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:59:23.253021 1240463 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:59:23.253117 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:23.253156 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-591175 minikube.k8s.io/updated_at=2025_11_23T08_59_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=no-preload-591175 minikube.k8s.io/primary=true
	I1123 08:59:23.284136 1240463 ops.go:34] apiserver oom_adj: -16
	I1123 08:59:23.621460 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:24.122108 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:24.621646 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:25.121502 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:25.622097 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:26.121650 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:26.621704 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:27.121638 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:27.236338 1240463 kubeadm.go:1114] duration metric: took 3.983274333s to wait for elevateKubeSystemPrivileges
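The burst of identical "kubectl get sa default" runs above is a readiness poll: minikube retries roughly every half second until the default ServiceAccount exists before it considers kube-system privileges elevated. A hedged shell equivalent of that loop, reusing the binary and kubeconfig paths from the log:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # retry until the default ServiceAccount appears
	done
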
	I1123 08:59:27.236364 1240463 kubeadm.go:403] duration metric: took 25.412811168s to StartCluster
	I1123 08:59:27.236384 1240463 settings.go:142] acquiring lock: {Name:mk23f3092f33e47ced9558cb4bac2b30c55547fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:27.236448 1240463 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:59:27.237079 1240463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:27.237291 1240463 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:59:27.237428 1240463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:59:27.237673 1240463 config.go:182] Loaded profile config "no-preload-591175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:59:27.237639 1240463 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:59:27.237720 1240463 addons.go:70] Setting default-storageclass=true in profile "no-preload-591175"
	I1123 08:59:27.237740 1240463 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-591175"
	I1123 08:59:27.238058 1240463 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 08:59:27.237720 1240463 addons.go:70] Setting storage-provisioner=true in profile "no-preload-591175"
	I1123 08:59:27.238477 1240463 addons.go:239] Setting addon storage-provisioner=true in "no-preload-591175"
	I1123 08:59:27.238508 1240463 host.go:66] Checking if "no-preload-591175" exists ...
	I1123 08:59:27.238917 1240463 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 08:59:27.240449 1240463 out.go:179] * Verifying Kubernetes components...
	I1123 08:59:27.244144 1240463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:27.269242 1240463 addons.go:239] Setting addon default-storageclass=true in "no-preload-591175"
	I1123 08:59:27.269283 1240463 host.go:66] Checking if "no-preload-591175" exists ...
	I1123 08:59:27.269696 1240463 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 08:59:27.287040 1240463 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:59:23.285755 1244564 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-261704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.688886773s)
	I1123 08:59:23.285788 1244564 kic.go:203] duration metric: took 4.689022326s to extract preloaded images to volume ...
	W1123 08:59:23.285918 1244564 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:59:23.286011 1244564 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:59:23.379608 1244564 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-261704 --name newest-cni-261704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-261704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-261704 --network newest-cni-261704 --ip 192.168.76.2 --volume newest-cni-261704:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:59:23.773470 1244564 cli_runner.go:164] Run: docker container inspect newest-cni-261704 --format={{.State.Running}}
	I1123 08:59:23.791151 1244564 cli_runner.go:164] Run: docker container inspect newest-cni-261704 --format={{.State.Status}}
	I1123 08:59:23.812608 1244564 cli_runner.go:164] Run: docker exec newest-cni-261704 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:59:23.874956 1244564 oci.go:144] the created container "newest-cni-261704" has a running status.
	I1123 08:59:23.874985 1244564 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/newest-cni-261704/id_rsa...
	I1123 08:59:24.036379 1244564 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/newest-cni-261704/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:59:24.064859 1244564 cli_runner.go:164] Run: docker container inspect newest-cni-261704 --format={{.State.Status}}
	I1123 08:59:24.093693 1244564 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:59:24.093719 1244564 kic_runner.go:114] Args: [docker exec --privileged newest-cni-261704 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:59:24.167294 1244564 cli_runner.go:164] Run: docker container inspect newest-cni-261704 --format={{.State.Status}}
	I1123 08:59:24.186569 1244564 machine.go:94] provisionDockerMachine start ...
	I1123 08:59:24.186671 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:24.215125 1244564 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:24.219292 1244564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34547 <nil> <nil>}
	I1123 08:59:24.219312 1244564 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:59:24.220081 1244564 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:59:27.426651 1244564 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-261704
	
	I1123 08:59:27.426674 1244564 ubuntu.go:182] provisioning hostname "newest-cni-261704"
	I1123 08:59:27.426736 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:27.293253 1240463 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:59:27.293287 1240463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:59:27.293351 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:59:27.308661 1240463 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:59:27.308682 1240463 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:59:27.308741 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:59:27.331293 1240463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 08:59:27.347595 1240463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 08:59:27.670666 1240463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:59:27.731463 1240463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:59:27.808562 1240463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:59:27.808679 1240463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:59:29.106182 1240463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.374683804s)
	I1123 08:59:29.106268 1240463 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.297678176s)
	I1123 08:59:29.106286 1240463 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
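The sed pipeline a few lines up rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway. A quick, hedged way to confirm the injected stanza from a working kubeconfig (the expected lines in the comments follow directly from the sed expression above):

	kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	#        hosts {
	#           192.168.85.1 host.minikube.internal
	#           fallthrough
	#        }
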
	I1123 08:59:29.107443 1240463 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.298741203s)
	I1123 08:59:29.108068 1240463 node_ready.go:35] waiting up to 6m0s for node "no-preload-591175" to be "Ready" ...
	I1123 08:59:29.111270 1240463 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1123 08:59:27.454375 1244564 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:27.454684 1244564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34547 <nil> <nil>}
	I1123 08:59:27.454695 1244564 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-261704 && echo "newest-cni-261704" | sudo tee /etc/hostname
	I1123 08:59:27.661034 1244564 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-261704
	
	I1123 08:59:27.661157 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:27.690249 1244564 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:27.690555 1244564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34547 <nil> <nil>}
	I1123 08:59:27.690577 1244564 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-261704' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-261704/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-261704' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:59:27.871593 1244564 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:59:27.871622 1244564 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 08:59:27.871664 1244564 ubuntu.go:190] setting up certificates
	I1123 08:59:27.871678 1244564 provision.go:84] configureAuth start
	I1123 08:59:27.871756 1244564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-261704
	I1123 08:59:27.896065 1244564 provision.go:143] copyHostCerts
	I1123 08:59:27.896137 1244564 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem, removing ...
	I1123 08:59:27.896150 1244564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem
	I1123 08:59:27.896229 1244564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 08:59:27.896327 1244564 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem, removing ...
	I1123 08:59:27.896338 1244564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem
	I1123 08:59:27.896366 1244564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 08:59:27.896431 1244564 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem, removing ...
	I1123 08:59:27.896440 1244564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem
	I1123 08:59:27.896464 1244564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
	I1123 08:59:27.896523 1244564 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.newest-cni-261704 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-261704]
	I1123 08:59:28.047101 1244564 provision.go:177] copyRemoteCerts
	I1123 08:59:28.047172 1244564 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:59:28.047238 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:28.071237 1244564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/newest-cni-261704/id_rsa Username:docker}
	I1123 08:59:28.183499 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:59:28.208122 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:59:28.235662 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:59:28.261414 1244564 provision.go:87] duration metric: took 389.708755ms to configureAuth
	I1123 08:59:28.261455 1244564 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:59:28.261674 1244564 config.go:182] Loaded profile config "newest-cni-261704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:59:28.261790 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:28.283704 1244564 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:28.284040 1244564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34547 <nil> <nil>}
	I1123 08:59:28.284061 1244564 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:59:28.674972 1244564 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
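	The /etc/sysconfig/crio.minikube file written above only has an effect if the crio unit sources it; in the kicbase image that wiring is expected to be a systemd drop-in roughly like the following (an assumption about the image, not captured in this log; the drop-in path is illustrative):
	    # /etc/systemd/system/crio.service.d/10-minikube.conf
	    [Service]
	    EnvironmentFile=-/etc/sysconfig/crio.minikube
	    ExecStart=
	    ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS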
	I1123 08:59:28.674995 1244564 machine.go:97] duration metric: took 4.488406812s to provisionDockerMachine
	I1123 08:59:28.675006 1244564 client.go:176] duration metric: took 10.89628715s to LocalClient.Create
	I1123 08:59:28.675036 1244564 start.go:167] duration metric: took 10.896373572s to libmachine.API.Create "newest-cni-261704"
	I1123 08:59:28.675046 1244564 start.go:293] postStartSetup for "newest-cni-261704" (driver="docker")
	I1123 08:59:28.675056 1244564 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:59:28.675135 1244564 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:59:28.675209 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:28.707636 1244564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/newest-cni-261704/id_rsa Username:docker}
	I1123 08:59:28.825182 1244564 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:59:28.833667 1244564 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:59:28.833701 1244564 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:59:28.833712 1244564 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/addons for local assets ...
	I1123 08:59:28.833765 1244564 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/files for local assets ...
	I1123 08:59:28.833847 1244564 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem -> 10431592.pem in /etc/ssl/certs
	I1123 08:59:28.833949 1244564 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:59:28.845824 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:59:28.874731 1244564 start.go:296] duration metric: took 199.670583ms for postStartSetup
	I1123 08:59:28.875147 1244564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-261704
	I1123 08:59:28.898866 1244564 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/config.json ...
	I1123 08:59:28.899764 1244564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:59:28.899885 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:28.923155 1244564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/newest-cni-261704/id_rsa Username:docker}
	I1123 08:59:29.036840 1244564 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:59:29.041900 1244564 start.go:128] duration metric: took 11.266959559s to createHost
	I1123 08:59:29.041924 1244564 start.go:83] releasing machines lock for "newest-cni-261704", held for 11.267082271s
	I1123 08:59:29.042000 1244564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-261704
	I1123 08:59:29.058864 1244564 ssh_runner.go:195] Run: cat /version.json
	I1123 08:59:29.058922 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:29.059167 1244564 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:59:29.059255 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:29.101630 1244564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/newest-cni-261704/id_rsa Username:docker}
	I1123 08:59:29.109383 1244564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/newest-cni-261704/id_rsa Username:docker}
	I1123 08:59:29.327177 1244564 ssh_runner.go:195] Run: systemctl --version
	I1123 08:59:29.333826 1244564 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:59:29.386629 1244564 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:59:29.391969 1244564 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:59:29.392114 1244564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:59:29.434140 1244564 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:59:29.434217 1244564 start.go:496] detecting cgroup driver to use...
	I1123 08:59:29.434264 1244564 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:59:29.434350 1244564 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:59:29.458827 1244564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:59:29.471807 1244564 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:59:29.471918 1244564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:59:29.494235 1244564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:59:29.522077 1244564 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:59:29.683237 1244564 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:59:29.862882 1244564 docker.go:234] disabling docker service ...
	I1123 08:59:29.862998 1244564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:59:29.887228 1244564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:59:29.901156 1244564 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:59:30.057516 1244564 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:59:30.228173 1244564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:59:30.243336 1244564 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:59:30.258724 1244564 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:59:30.258845 1244564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:59:30.268384 1244564 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 08:59:30.268532 1244564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:59:30.276955 1244564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:59:30.285928 1244564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:59:30.294213 1244564 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:59:30.301734 1244564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:59:30.309965 1244564 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:59:30.322335 1244564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:59:30.330550 1244564 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:59:30.339255 1244564 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:59:30.346854 1244564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:30.498019 1244564 ssh_runner.go:195] Run: sudo systemctl restart crio
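	Taken together, the sed edits above amount to a drop-in along these lines (a reconstruction from the commands, not a capture of the file on disk):
	    # /etc/crio/crio.conf.d/02-crio.conf (excerpt)
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]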
	I1123 08:59:30.805999 1244564 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:59:30.806142 1244564 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:59:30.811021 1244564 start.go:564] Will wait 60s for crictl version
	I1123 08:59:30.811139 1244564 ssh_runner.go:195] Run: which crictl
	I1123 08:59:30.815171 1244564 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:59:30.856840 1244564 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:59:30.856992 1244564 ssh_runner.go:195] Run: crio --version
	I1123 08:59:30.913613 1244564 ssh_runner.go:195] Run: crio --version
	I1123 08:59:30.948934 1244564 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 08:59:30.951901 1244564 cli_runner.go:164] Run: docker network inspect newest-cni-261704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:59:30.968763 1244564 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 08:59:30.972911 1244564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:59:30.985213 1244564 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 08:59:29.113413 1240463 addons.go:530] duration metric: took 1.875770735s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 08:59:29.610065 1240463 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-591175" context rescaled to 1 replicas
	W1123 08:59:31.112299 1240463 node_ready.go:57] node "no-preload-591175" has "Ready":"False" status (will retry)
	I1123 08:59:30.988156 1244564 kubeadm.go:884] updating cluster {Name:newest-cni-261704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-261704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:59:30.988320 1244564 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:59:30.988395 1244564 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:31.039444 1244564 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:59:31.039467 1244564 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:59:31.039525 1244564 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:31.077230 1244564 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:59:31.077257 1244564 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:59:31.077265 1244564 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 08:59:31.077351 1244564 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-261704 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-261704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
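	This kubelet drop-in is what is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 367-byte scp below); the merged unit can be inspected on the node with (a sketch):
	    systemctl cat kubelet
	    systemctl show kubelet -p ExecStart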
	I1123 08:59:31.077437 1244564 ssh_runner.go:195] Run: crio config
	I1123 08:59:31.171766 1244564 cni.go:84] Creating CNI manager for ""
	I1123 08:59:31.171787 1244564 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:59:31.171802 1244564 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 08:59:31.171825 1244564 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-261704 NodeName:newest-cni-261704 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:59:31.171952 1244564 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-261704"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
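	Recent kubeadm releases can sanity-check a rendered config file before init (a sketch; this run relies on kubeadm init itself to reject bad config):
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml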
	I1123 08:59:31.172023 1244564 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:59:31.180242 1244564 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:59:31.180313 1244564 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:59:31.188269 1244564 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 08:59:31.207676 1244564 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:59:31.231426 1244564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1123 08:59:31.251774 1244564 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:59:31.256160 1244564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:59:31.267941 1244564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:31.402813 1244564 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:59:31.418811 1244564 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704 for IP: 192.168.76.2
	I1123 08:59:31.418831 1244564 certs.go:195] generating shared ca certs ...
	I1123 08:59:31.418847 1244564 certs.go:227] acquiring lock for ca certs: {Name:mk8b2dd1177c57b74f955f055073d275001ee616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:31.418977 1244564 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key
	I1123 08:59:31.419028 1244564 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key
	I1123 08:59:31.419039 1244564 certs.go:257] generating profile certs ...
	I1123 08:59:31.419096 1244564 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/client.key
	I1123 08:59:31.419115 1244564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/client.crt with IP's: []
	I1123 08:59:31.524617 1244564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/client.crt ...
	I1123 08:59:31.524676 1244564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/client.crt: {Name:mk2d18aee4f34c09c800bf35993d941bb666bf5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:31.524935 1244564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/client.key ...
	I1123 08:59:31.524952 1244564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/client.key: {Name:mkb299d5939af82bb93a5f43963524c7ffce0dae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:31.525178 1244564 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.key.059e974a
	I1123 08:59:31.525212 1244564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.crt.059e974a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 08:59:31.980619 1244564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.crt.059e974a ...
	I1123 08:59:31.980650 1244564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.crt.059e974a: {Name:mka6bcec19cace4fe3d2e25c3dbc530242271126 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:31.980822 1244564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.key.059e974a ...
	I1123 08:59:31.980836 1244564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.key.059e974a: {Name:mkcc07b3e148167fe5d23f183080a103a6b6316e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:31.980920 1244564 certs.go:382] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.crt.059e974a -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.crt
	I1123 08:59:31.980999 1244564 certs.go:386] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.key.059e974a -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.key
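	The apiserver SAN list above pairs the node IP 192.168.76.2 with 10.96.0.1, the first address of the 10.96.0.0/12 service CIDR (the in-cluster ClusterIP of the kubernetes Service). The SANs can be read back from the generated certificate (a sketch):
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'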
	I1123 08:59:31.981061 1244564 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/proxy-client.key
	I1123 08:59:31.981078 1244564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/proxy-client.crt with IP's: []
	I1123 08:59:32.261670 1244564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/proxy-client.crt ...
	I1123 08:59:32.261700 1244564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/proxy-client.crt: {Name:mk8a6a8c02de1362065c2dad356d7efb7d5cfcc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:32.261887 1244564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/proxy-client.key ...
	I1123 08:59:32.261902 1244564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/proxy-client.key: {Name:mkf9d35cf9ea7535b4a7d7eef85ab018f7c0ee67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:32.262101 1244564 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem (1338 bytes)
	W1123 08:59:32.262151 1244564 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159_empty.pem, impossibly tiny 0 bytes
	I1123 08:59:32.262165 1244564 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:59:32.262192 1244564 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:59:32.262219 1244564 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:59:32.262246 1244564 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem (1675 bytes)
	I1123 08:59:32.262296 1244564 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:59:32.262891 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:59:32.281535 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:59:32.299461 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:59:32.317887 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:59:32.339781 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 08:59:32.368772 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:59:32.393020 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:59:32.410478 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:59:32.430366 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:59:32.449946 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem --> /usr/share/ca-certificates/1043159.pem (1338 bytes)
	W1123 08:59:33.611163 1240463 node_ready.go:57] node "no-preload-591175" has "Ready":"False" status (will retry)
	W1123 08:59:35.611342 1240463 node_ready.go:57] node "no-preload-591175" has "Ready":"False" status (will retry)
	I1123 08:59:32.470300 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /usr/share/ca-certificates/10431592.pem (1708 bytes)
	I1123 08:59:32.487977 1244564 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:59:32.500357 1244564 ssh_runner.go:195] Run: openssl version
	I1123 08:59:32.513388 1244564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10431592.pem && ln -fs /usr/share/ca-certificates/10431592.pem /etc/ssl/certs/10431592.pem"
	I1123 08:59:32.523174 1244564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10431592.pem
	I1123 08:59:32.527075 1244564 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:03 /usr/share/ca-certificates/10431592.pem
	I1123 08:59:32.527216 1244564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10431592.pem
	I1123 08:59:32.578595 1244564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10431592.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:59:32.587052 1244564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:59:32.595167 1244564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:32.598614 1244564 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:32.598707 1244564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:32.648282 1244564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:59:32.656664 1244564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1043159.pem && ln -fs /usr/share/ca-certificates/1043159.pem /etc/ssl/certs/1043159.pem"
	I1123 08:59:32.666560 1244564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1043159.pem
	I1123 08:59:32.670341 1244564 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:03 /usr/share/ca-certificates/1043159.pem
	I1123 08:59:32.670405 1244564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1043159.pem
	I1123 08:59:32.717692 1244564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1043159.pem /etc/ssl/certs/51391683.0"
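	The hash-named links created here follow OpenSSL's hashed CA directory convention: each certificate under /etc/ssl/certs gets a <subject-hash>.0 symlink so the TLS stack can find it by subject. The hash is reproducible by hand (a sketch; b5213941 matches the minikubeCA link above):
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0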
	I1123 08:59:32.726869 1244564 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:59:32.734801 1244564 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:59:32.734921 1244564 kubeadm.go:401] StartCluster: {Name:newest-cni-261704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-261704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:59:32.735004 1244564 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:59:32.735062 1244564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:59:32.762871 1244564 cri.go:89] found id: ""
	I1123 08:59:32.762983 1244564 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:59:32.776304 1244564 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:59:32.783889 1244564 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:59:32.784007 1244564 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:59:32.791763 1244564 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:59:32.791822 1244564 kubeadm.go:158] found existing configuration files:
	
	I1123 08:59:32.791913 1244564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:59:32.800153 1244564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:59:32.800224 1244564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:59:32.807913 1244564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:59:32.815338 1244564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:59:32.815405 1244564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:59:32.822791 1244564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:59:32.831343 1244564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:59:32.831465 1244564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:59:32.838625 1244564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:59:32.845903 1244564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:59:32.846022 1244564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:59:32.853097 1244564 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:59:32.928941 1244564 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 08:59:32.929236 1244564 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 08:59:32.999697 1244564 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1123 08:59:37.611835 1240463 node_ready.go:57] node "no-preload-591175" has "Ready":"False" status (will retry)
	W1123 08:59:39.612208 1240463 node_ready.go:57] node "no-preload-591175" has "Ready":"False" status (will retry)
	W1123 08:59:42.112980 1240463 node_ready.go:57] node "no-preload-591175" has "Ready":"False" status (will retry)
	I1123 08:59:42.616871 1240463 node_ready.go:49] node "no-preload-591175" is "Ready"
	I1123 08:59:42.616904 1240463 node_ready.go:38] duration metric: took 13.508818762s for node "no-preload-591175" to be "Ready" ...
	I1123 08:59:42.616917 1240463 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:59:42.616978 1240463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:59:42.636689 1240463 api_server.go:72] duration metric: took 15.399369798s to wait for apiserver process to appear ...
	I1123 08:59:42.636713 1240463 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:59:42.636732 1240463 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 08:59:42.650607 1240463 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 08:59:42.654990 1240463 api_server.go:141] control plane version: v1.34.1
	I1123 08:59:42.655022 1240463 api_server.go:131] duration metric: took 18.302131ms to wait for apiserver health ...
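	The same probe can be repeated by hand against the endpoint shown above (a sketch; -k skips verification because the minikube CA is not in the host trust store, and /healthz is readable anonymously under the default RBAC rules):
	    curl -k https://192.168.85.2:8443/healthz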
	I1123 08:59:42.655031 1240463 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:59:42.685942 1240463 system_pods.go:59] 8 kube-system pods found
	I1123 08:59:42.685983 1240463 system_pods.go:61] "coredns-66bc5c9577-zwlsw" [4493cf17-56c7-4aec-aff9-f1b7a47398ea] Pending
	I1123 08:59:42.685990 1240463 system_pods.go:61] "etcd-no-preload-591175" [d2307eaa-f09d-4d85-8172-b403550f572f] Running
	I1123 08:59:42.685994 1240463 system_pods.go:61] "kindnet-v65j2" [c422d680-2063-435a-8b26-e265e3554728] Running
	I1123 08:59:42.685999 1240463 system_pods.go:61] "kube-apiserver-no-preload-591175" [07643f8f-afbf-48fd-9a2c-b68e6f2a69f9] Running
	I1123 08:59:42.686005 1240463 system_pods.go:61] "kube-controller-manager-no-preload-591175" [153ceee0-38e4-41e6-98bc-915c5d18b057] Running
	I1123 08:59:42.686008 1240463 system_pods.go:61] "kube-proxy-rblwh" [8c4a2941-2f19-43ba-8f9a-7a48072b1223] Running
	I1123 08:59:42.686012 1240463 system_pods.go:61] "kube-scheduler-no-preload-591175" [ce19b8a6-00bd-4cdc-a245-0a8f9551e38d] Running
	I1123 08:59:42.686016 1240463 system_pods.go:61] "storage-provisioner" [923af3fc-5d78-45d7-ad14-fd020a72b76d] Pending
	I1123 08:59:42.686023 1240463 system_pods.go:74] duration metric: took 30.985285ms to wait for pod list to return data ...
	I1123 08:59:42.686030 1240463 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:59:42.693937 1240463 default_sa.go:45] found service account: "default"
	I1123 08:59:42.693967 1240463 default_sa.go:55] duration metric: took 7.923099ms for default service account to be created ...
	I1123 08:59:42.693977 1240463 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:59:42.702771 1240463 system_pods.go:86] 8 kube-system pods found
	I1123 08:59:42.702802 1240463 system_pods.go:89] "coredns-66bc5c9577-zwlsw" [4493cf17-56c7-4aec-aff9-f1b7a47398ea] Pending
	I1123 08:59:42.702818 1240463 system_pods.go:89] "etcd-no-preload-591175" [d2307eaa-f09d-4d85-8172-b403550f572f] Running
	I1123 08:59:42.702823 1240463 system_pods.go:89] "kindnet-v65j2" [c422d680-2063-435a-8b26-e265e3554728] Running
	I1123 08:59:42.702828 1240463 system_pods.go:89] "kube-apiserver-no-preload-591175" [07643f8f-afbf-48fd-9a2c-b68e6f2a69f9] Running
	I1123 08:59:42.702832 1240463 system_pods.go:89] "kube-controller-manager-no-preload-591175" [153ceee0-38e4-41e6-98bc-915c5d18b057] Running
	I1123 08:59:42.702836 1240463 system_pods.go:89] "kube-proxy-rblwh" [8c4a2941-2f19-43ba-8f9a-7a48072b1223] Running
	I1123 08:59:42.702841 1240463 system_pods.go:89] "kube-scheduler-no-preload-591175" [ce19b8a6-00bd-4cdc-a245-0a8f9551e38d] Running
	I1123 08:59:42.702855 1240463 system_pods.go:89] "storage-provisioner" [923af3fc-5d78-45d7-ad14-fd020a72b76d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:59:42.702870 1240463 retry.go:31] will retry after 259.422275ms: missing components: kube-dns
	I1123 08:59:42.966917 1240463 system_pods.go:86] 8 kube-system pods found
	I1123 08:59:42.966954 1240463 system_pods.go:89] "coredns-66bc5c9577-zwlsw" [4493cf17-56c7-4aec-aff9-f1b7a47398ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:59:42.966961 1240463 system_pods.go:89] "etcd-no-preload-591175" [d2307eaa-f09d-4d85-8172-b403550f572f] Running
	I1123 08:59:42.966978 1240463 system_pods.go:89] "kindnet-v65j2" [c422d680-2063-435a-8b26-e265e3554728] Running
	I1123 08:59:42.966984 1240463 system_pods.go:89] "kube-apiserver-no-preload-591175" [07643f8f-afbf-48fd-9a2c-b68e6f2a69f9] Running
	I1123 08:59:42.966989 1240463 system_pods.go:89] "kube-controller-manager-no-preload-591175" [153ceee0-38e4-41e6-98bc-915c5d18b057] Running
	I1123 08:59:42.966993 1240463 system_pods.go:89] "kube-proxy-rblwh" [8c4a2941-2f19-43ba-8f9a-7a48072b1223] Running
	I1123 08:59:42.966997 1240463 system_pods.go:89] "kube-scheduler-no-preload-591175" [ce19b8a6-00bd-4cdc-a245-0a8f9551e38d] Running
	I1123 08:59:42.967007 1240463 system_pods.go:89] "storage-provisioner" [923af3fc-5d78-45d7-ad14-fd020a72b76d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:59:42.967024 1240463 retry.go:31] will retry after 324.257357ms: missing components: kube-dns
	I1123 08:59:43.296386 1240463 system_pods.go:86] 8 kube-system pods found
	I1123 08:59:43.296427 1240463 system_pods.go:89] "coredns-66bc5c9577-zwlsw" [4493cf17-56c7-4aec-aff9-f1b7a47398ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:59:43.296434 1240463 system_pods.go:89] "etcd-no-preload-591175" [d2307eaa-f09d-4d85-8172-b403550f572f] Running
	I1123 08:59:43.296440 1240463 system_pods.go:89] "kindnet-v65j2" [c422d680-2063-435a-8b26-e265e3554728] Running
	I1123 08:59:43.296444 1240463 system_pods.go:89] "kube-apiserver-no-preload-591175" [07643f8f-afbf-48fd-9a2c-b68e6f2a69f9] Running
	I1123 08:59:43.296449 1240463 system_pods.go:89] "kube-controller-manager-no-preload-591175" [153ceee0-38e4-41e6-98bc-915c5d18b057] Running
	I1123 08:59:43.296453 1240463 system_pods.go:89] "kube-proxy-rblwh" [8c4a2941-2f19-43ba-8f9a-7a48072b1223] Running
	I1123 08:59:43.296457 1240463 system_pods.go:89] "kube-scheduler-no-preload-591175" [ce19b8a6-00bd-4cdc-a245-0a8f9551e38d] Running
	I1123 08:59:43.296463 1240463 system_pods.go:89] "storage-provisioner" [923af3fc-5d78-45d7-ad14-fd020a72b76d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:59:43.296518 1240463 retry.go:31] will retry after 479.546707ms: missing components: kube-dns
	I1123 08:59:43.781237 1240463 system_pods.go:86] 8 kube-system pods found
	I1123 08:59:43.781317 1240463 system_pods.go:89] "coredns-66bc5c9577-zwlsw" [4493cf17-56c7-4aec-aff9-f1b7a47398ea] Running
	I1123 08:59:43.781347 1240463 system_pods.go:89] "etcd-no-preload-591175" [d2307eaa-f09d-4d85-8172-b403550f572f] Running
	I1123 08:59:43.781365 1240463 system_pods.go:89] "kindnet-v65j2" [c422d680-2063-435a-8b26-e265e3554728] Running
	I1123 08:59:43.781392 1240463 system_pods.go:89] "kube-apiserver-no-preload-591175" [07643f8f-afbf-48fd-9a2c-b68e6f2a69f9] Running
	I1123 08:59:43.781422 1240463 system_pods.go:89] "kube-controller-manager-no-preload-591175" [153ceee0-38e4-41e6-98bc-915c5d18b057] Running
	I1123 08:59:43.781439 1240463 system_pods.go:89] "kube-proxy-rblwh" [8c4a2941-2f19-43ba-8f9a-7a48072b1223] Running
	I1123 08:59:43.781459 1240463 system_pods.go:89] "kube-scheduler-no-preload-591175" [ce19b8a6-00bd-4cdc-a245-0a8f9551e38d] Running
	I1123 08:59:43.781495 1240463 system_pods.go:89] "storage-provisioner" [923af3fc-5d78-45d7-ad14-fd020a72b76d] Running
	I1123 08:59:43.781518 1240463 system_pods.go:126] duration metric: took 1.08753334s to wait for k8s-apps to be running ...
	I1123 08:59:43.781539 1240463 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:59:43.781622 1240463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:59:43.800796 1240463 system_svc.go:56] duration metric: took 19.248927ms WaitForService to wait for kubelet
	I1123 08:59:43.800878 1240463 kubeadm.go:587] duration metric: took 16.563561887s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:59:43.800911 1240463 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:59:43.804795 1240463 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:59:43.804876 1240463 node_conditions.go:123] node cpu capacity is 2
	I1123 08:59:43.804913 1240463 node_conditions.go:105] duration metric: took 3.980393ms to run NodePressure ...
	I1123 08:59:43.804951 1240463 start.go:242] waiting for startup goroutines ...
	I1123 08:59:43.804974 1240463 start.go:247] waiting for cluster config update ...
	I1123 08:59:43.804998 1240463 start.go:256] writing updated cluster config ...
	I1123 08:59:43.805361 1240463 ssh_runner.go:195] Run: rm -f paused
	I1123 08:59:43.813163 1240463 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:59:43.820576 1240463 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zwlsw" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:43.829163 1240463 pod_ready.go:94] pod "coredns-66bc5c9577-zwlsw" is "Ready"
	I1123 08:59:43.829238 1240463 pod_ready.go:86] duration metric: took 8.588254ms for pod "coredns-66bc5c9577-zwlsw" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:43.832094 1240463 pod_ready.go:83] waiting for pod "etcd-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:43.842097 1240463 pod_ready.go:94] pod "etcd-no-preload-591175" is "Ready"
	I1123 08:59:43.842178 1240463 pod_ready.go:86] duration metric: took 10.013142ms for pod "etcd-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:43.859270 1240463 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:43.864795 1240463 pod_ready.go:94] pod "kube-apiserver-no-preload-591175" is "Ready"
	I1123 08:59:43.864870 1240463 pod_ready.go:86] duration metric: took 5.528698ms for pod "kube-apiserver-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:43.867363 1240463 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:44.217778 1240463 pod_ready.go:94] pod "kube-controller-manager-no-preload-591175" is "Ready"
	I1123 08:59:44.217807 1240463 pod_ready.go:86] duration metric: took 350.386365ms for pod "kube-controller-manager-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:44.418190 1240463 pod_ready.go:83] waiting for pod "kube-proxy-rblwh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:44.818151 1240463 pod_ready.go:94] pod "kube-proxy-rblwh" is "Ready"
	I1123 08:59:44.818174 1240463 pod_ready.go:86] duration metric: took 399.961435ms for pod "kube-proxy-rblwh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:45.019120 1240463 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:45.418088 1240463 pod_ready.go:94] pod "kube-scheduler-no-preload-591175" is "Ready"
	I1123 08:59:45.418118 1240463 pod_ready.go:86] duration metric: took 398.966468ms for pod "kube-scheduler-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:45.418131 1240463 pod_ready.go:40] duration metric: took 1.604889038s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
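	An equivalent readiness check could be driven with kubectl against the same label selectors (a sketch using the context name from this run):
	    kubectl --context no-preload-591175 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=240s
	    kubectl --context no-preload-591175 -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=240s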
	I1123 08:59:45.519360 1240463 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 08:59:45.522647 1240463 out.go:179] * Done! kubectl is now configured to use "no-preload-591175" cluster and "default" namespace by default
	I1123 08:59:49.493897 1244564 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:59:49.493970 1244564 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:59:49.494094 1244564 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:59:49.494186 1244564 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:59:49.494237 1244564 kubeadm.go:319] OS: Linux
	I1123 08:59:49.494300 1244564 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:59:49.494355 1244564 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:59:49.494408 1244564 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:59:49.494460 1244564 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:59:49.494519 1244564 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:59:49.494579 1244564 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:59:49.494632 1244564 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:59:49.494705 1244564 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:59:49.494772 1244564 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:59:49.494860 1244564 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:59:49.494985 1244564 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:59:49.495106 1244564 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:59:49.495207 1244564 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:59:49.500179 1244564 out.go:252]   - Generating certificates and keys ...
	I1123 08:59:49.500272 1244564 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:59:49.500345 1244564 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:59:49.500415 1244564 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:59:49.500475 1244564 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:59:49.500539 1244564 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:59:49.500597 1244564 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:59:49.500654 1244564 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:59:49.500777 1244564 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-261704] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:59:49.500833 1244564 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:59:49.500954 1244564 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-261704] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:59:49.501022 1244564 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:59:49.501088 1244564 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:59:49.501134 1244564 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:59:49.501193 1244564 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:59:49.501247 1244564 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:59:49.501307 1244564 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:59:49.501374 1244564 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:59:49.501443 1244564 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:59:49.501502 1244564 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:59:49.501587 1244564 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:59:49.501653 1244564 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:59:49.504568 1244564 out.go:252]   - Booting up control plane ...
	I1123 08:59:49.504672 1244564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:59:49.504753 1244564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:59:49.504820 1244564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:59:49.504921 1244564 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:59:49.505013 1244564 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:59:49.505131 1244564 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:59:49.505221 1244564 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:59:49.505265 1244564 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:59:49.505393 1244564 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:59:49.505495 1244564 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:59:49.505552 1244564 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501089831s
	I1123 08:59:49.505643 1244564 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:59:49.505722 1244564 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 08:59:49.505810 1244564 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:59:49.505887 1244564 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:59:49.505962 1244564 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.277533567s
	I1123 08:59:49.506029 1244564 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.933753111s
	I1123 08:59:49.506096 1244564 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502512395s
	I1123 08:59:49.506199 1244564 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:59:49.506320 1244564 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:59:49.506389 1244564 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:59:49.506566 1244564 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-261704 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:59:49.506625 1244564 kubeadm.go:319] [bootstrap-token] Using token: oyh4ba.hikr7qjmyumlt8y0
	I1123 08:59:49.511549 1244564 out.go:252]   - Configuring RBAC rules ...
	I1123 08:59:49.511734 1244564 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:59:49.511829 1244564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:59:49.511989 1244564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:59:49.512155 1244564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:59:49.512286 1244564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:59:49.512372 1244564 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:59:49.512485 1244564 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:59:49.512534 1244564 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:59:49.512582 1244564 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:59:49.512589 1244564 kubeadm.go:319] 
	I1123 08:59:49.512646 1244564 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:59:49.512654 1244564 kubeadm.go:319] 
	I1123 08:59:49.512726 1244564 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:59:49.512733 1244564 kubeadm.go:319] 
	I1123 08:59:49.512757 1244564 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:59:49.512816 1244564 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:59:49.512867 1244564 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:59:49.512874 1244564 kubeadm.go:319] 
	I1123 08:59:49.512925 1244564 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:59:49.512933 1244564 kubeadm.go:319] 
	I1123 08:59:49.512978 1244564 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:59:49.512984 1244564 kubeadm.go:319] 
	I1123 08:59:49.513034 1244564 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:59:49.513107 1244564 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:59:49.513174 1244564 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:59:49.513181 1244564 kubeadm.go:319] 
	I1123 08:59:49.513260 1244564 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:59:49.513336 1244564 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:59:49.513343 1244564 kubeadm.go:319] 
	I1123 08:59:49.513423 1244564 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token oyh4ba.hikr7qjmyumlt8y0 \
	I1123 08:59:49.513524 1244564 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb \
	I1123 08:59:49.513546 1244564 kubeadm.go:319] 	--control-plane 
	I1123 08:59:49.513553 1244564 kubeadm.go:319] 
	I1123 08:59:49.513632 1244564 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:59:49.513640 1244564 kubeadm.go:319] 
	I1123 08:59:49.513717 1244564 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token oyh4ba.hikr7qjmyumlt8y0 \
	I1123 08:59:49.513829 1244564 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb 
	I1123 08:59:49.513841 1244564 cni.go:84] Creating CNI manager for ""
	I1123 08:59:49.513848 1244564 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:59:49.517044 1244564 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:59:49.519970 1244564 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:59:49.523843 1244564 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:59:49.523863 1244564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:59:49.536157 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:59:49.827486 1244564 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:59:49.827634 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:49.827716 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-261704 minikube.k8s.io/updated_at=2025_11_23T08_59_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=newest-cni-261704 minikube.k8s.io/primary=true
	I1123 08:59:49.993458 1244564 ops.go:34] apiserver oom_adj: -16
	I1123 08:59:49.993571 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:50.494170 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:50.994524 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:51.493860 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:51.994019 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:52.494107 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:52.994376 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:53.494269 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:53.585934 1244564 kubeadm.go:1114] duration metric: took 3.758351017s to wait for elevateKubeSystemPrivileges
	I1123 08:59:53.585975 1244564 kubeadm.go:403] duration metric: took 20.851060264s to StartCluster
	I1123 08:59:53.585993 1244564 settings.go:142] acquiring lock: {Name:mk23f3092f33e47ced9558cb4bac2b30c55547fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:53.586055 1244564 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:59:53.587001 1244564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:53.587239 1244564 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:59:53.587322 1244564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:59:53.587549 1244564 config.go:182] Loaded profile config "newest-cni-261704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:59:53.587579 1244564 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:59:53.587634 1244564 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-261704"
	I1123 08:59:53.587650 1244564 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-261704"
	I1123 08:59:53.587669 1244564 host.go:66] Checking if "newest-cni-261704" exists ...
	I1123 08:59:53.587689 1244564 addons.go:70] Setting default-storageclass=true in profile "newest-cni-261704"
	I1123 08:59:53.587707 1244564 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-261704"
	I1123 08:59:53.588022 1244564 cli_runner.go:164] Run: docker container inspect newest-cni-261704 --format={{.State.Status}}
	I1123 08:59:53.588349 1244564 cli_runner.go:164] Run: docker container inspect newest-cni-261704 --format={{.State.Status}}
	I1123 08:59:53.590612 1244564 out.go:179] * Verifying Kubernetes components...
	I1123 08:59:53.599307 1244564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:53.627428 1244564 addons.go:239] Setting addon default-storageclass=true in "newest-cni-261704"
	I1123 08:59:53.627463 1244564 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:59:53.627465 1244564 host.go:66] Checking if "newest-cni-261704" exists ...
	I1123 08:59:53.627928 1244564 cli_runner.go:164] Run: docker container inspect newest-cni-261704 --format={{.State.Status}}
	I1123 08:59:53.631950 1244564 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:59:53.631975 1244564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:59:53.632038 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:53.651369 1244564 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:59:53.651392 1244564 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:59:53.651461 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:53.680486 1244564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/newest-cni-261704/id_rsa Username:docker}
	I1123 08:59:53.695602 1244564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/newest-cni-261704/id_rsa Username:docker}
	I1123 08:59:53.974048 1244564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:59:53.974218 1244564 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:59:53.977204 1244564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:59:54.028136 1244564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:59:54.664510 1244564 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 08:59:54.665394 1244564 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:59:54.665566 1244564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:59:54.943839 1244564 api_server.go:72] duration metric: took 1.356569742s to wait for apiserver process to appear ...
	I1123 08:59:54.943866 1244564 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:59:54.943884 1244564 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:59:54.946709 1244564 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1123 08:59:54.950456 1244564 addons.go:530] duration metric: took 1.362867394s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 08:59:54.960176 1244564 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 08:59:54.961114 1244564 api_server.go:141] control plane version: v1.34.1
	I1123 08:59:54.961134 1244564 api_server.go:131] duration metric: took 17.26193ms to wait for apiserver health ...
	I1123 08:59:54.961143 1244564 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:59:54.964349 1244564 system_pods.go:59] 8 kube-system pods found
	I1123 08:59:54.964387 1244564 system_pods.go:61] "coredns-66bc5c9577-mdvx8" [aae4ba97-00dc-4620-818d-e571ed2a5b99] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 08:59:54.964398 1244564 system_pods.go:61] "etcd-newest-cni-261704" [ceed2430-2405-415c-9d8a-cbb9fec62bb3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:59:54.964407 1244564 system_pods.go:61] "kindnet-k7fsm" [7c5f3452-ed50-4a8d-82e3-51abceb3b21b] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:59:54.964415 1244564 system_pods.go:61] "kube-apiserver-newest-cni-261704" [b69d74bd-25b5-478e-a10e-e2c0b67c51d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:59:54.964428 1244564 system_pods.go:61] "kube-controller-manager-newest-cni-261704" [6b736ad3-cf70-428d-aabf-8635b1b3fabd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:59:54.964435 1244564 system_pods.go:61] "kube-proxy-wp8vw" [36630050-6d8d-433a-a3bc-77fc44b8484e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:59:54.964441 1244564 system_pods.go:61] "kube-scheduler-newest-cni-261704" [c824e0c6-1c1a-48a1-b05a-114c05052710] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:59:54.964450 1244564 system_pods.go:61] "storage-provisioner" [2afa132f-b478-4d70-9125-e632f2084e4e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 08:59:54.964456 1244564 system_pods.go:74] duration metric: took 3.30755ms to wait for pod list to return data ...
	I1123 08:59:54.964470 1244564 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:59:54.966620 1244564 default_sa.go:45] found service account: "default"
	I1123 08:59:54.966638 1244564 default_sa.go:55] duration metric: took 2.161908ms for default service account to be created ...
	I1123 08:59:54.966649 1244564 kubeadm.go:587] duration metric: took 1.379384754s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 08:59:54.966667 1244564 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:59:54.969127 1244564 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:59:54.969155 1244564 node_conditions.go:123] node cpu capacity is 2
	I1123 08:59:54.969168 1244564 node_conditions.go:105] duration metric: took 2.496627ms to run NodePressure ...
	I1123 08:59:54.969181 1244564 start.go:242] waiting for startup goroutines ...
	I1123 08:59:55.169789 1244564 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-261704" context rescaled to 1 replicas
	I1123 08:59:55.169830 1244564 start.go:247] waiting for cluster config update ...
	I1123 08:59:55.169867 1244564 start.go:256] writing updated cluster config ...
	I1123 08:59:55.170191 1244564 ssh_runner.go:195] Run: rm -f paused
	I1123 08:59:55.227798 1244564 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 08:59:55.233185 1244564 out.go:179] * Done! kubectl is now configured to use "newest-cni-261704" cluster and "default" namespace by default
	
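A note on the `kubeadm join` output above: the sha256 value passed as --discovery-token-ca-cert-hash pins the cluster CA public key. As a minimal sketch (assuming the default minikube layout, i.e. the /var/lib/minikube/certs certificateDir reported in the kubeadm log), the hash can be recomputed on the control-plane node and compared against the value kubeadm printed:

    # run inside the node, e.g. after `minikube ssh -p newest-cni-261704`
    # the path below is the certificateDir reported in the kubeadm log above
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

The result should match the 2e6c6411... hash shown in the join command; a mismatch would mean a joining node is being pointed at a different CA.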
	
	==> CRI-O <==
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.165827515Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.173871481Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=8e755371-4251-4761-beaa-5b007e957a64 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.1802277Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-wp8vw/POD" id=e5b804d5-eda5-4277-870b-02b601784706 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.180286488Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.183571261Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e5b804d5-eda5-4277-870b-02b601784706 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.189660747Z" level=info msg="Ran pod sandbox 6c09dcf22b313644b13f90e3df66bba78a600094f234930f5f221b51ff3da12c with infra container: kube-system/kindnet-k7fsm/POD" id=8e755371-4251-4761-beaa-5b007e957a64 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.193107066Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=fbacfcb2-9deb-41eb-a6f3-24eda949e44d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.199395529Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d3a7b707-5e88-4c81-b04b-1a73b90a282f name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.205414789Z" level=info msg="Creating container: kube-system/kindnet-k7fsm/kindnet-cni" id=7e98d227-af9e-474d-a963-beb6257cde32 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.205621117Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.207160758Z" level=info msg="Ran pod sandbox 6720d4417a5d4be16a7d2d0c63636d12ac60b216229547209ed894bfcf579737 with infra container: kube-system/kube-proxy-wp8vw/POD" id=e5b804d5-eda5-4277-870b-02b601784706 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.208472754Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=8b35d21c-0fee-4795-aa74-5fc1c1b80e46 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.210495269Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=dd32571f-28f6-4a11-aa95-6454412e8254 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.215999344Z" level=info msg="Creating container: kube-system/kube-proxy-wp8vw/kube-proxy" id=81005ccd-0358-4a7c-946c-88f9876da10e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.216275127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.224257515Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.22492404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.232801488Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.23355816Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.273712252Z" level=info msg="Created container 55ab38111123e02f3051dd8042f46d5d127ea4c353c2d498e3aa07c6e8f7de9c: kube-system/kindnet-k7fsm/kindnet-cni" id=7e98d227-af9e-474d-a963-beb6257cde32 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.283809404Z" level=info msg="Starting container: 55ab38111123e02f3051dd8042f46d5d127ea4c353c2d498e3aa07c6e8f7de9c" id=359ef633-d93f-4127-a164-3d14fbf20da1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.285584861Z" level=info msg="Started container" PID=1484 containerID=55ab38111123e02f3051dd8042f46d5d127ea4c353c2d498e3aa07c6e8f7de9c description=kube-system/kindnet-k7fsm/kindnet-cni id=359ef633-d93f-4127-a164-3d14fbf20da1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c09dcf22b313644b13f90e3df66bba78a600094f234930f5f221b51ff3da12c
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.411461437Z" level=info msg="Created container 85476e594db8c4b4920b0ef75d6ad50c7767496dd94917c2e673d13b39330103: kube-system/kube-proxy-wp8vw/kube-proxy" id=81005ccd-0358-4a7c-946c-88f9876da10e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.41250473Z" level=info msg="Starting container: 85476e594db8c4b4920b0ef75d6ad50c7767496dd94917c2e673d13b39330103" id=aec2b5aa-c306-442b-b4ca-d27b637410e4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:59:54 newest-cni-261704 crio[838]: time="2025-11-23T08:59:54.426012969Z" level=info msg="Started container" PID=1485 containerID=85476e594db8c4b4920b0ef75d6ad50c7767496dd94917c2e673d13b39330103 description=kube-system/kube-proxy-wp8vw/kube-proxy id=aec2b5aa-c306-442b-b4ca-d27b637410e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6720d4417a5d4be16a7d2d0c63636d12ac60b216229547209ed894bfcf579737
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	55ab38111123e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   6c09dcf22b313       kindnet-k7fsm                               kube-system
	85476e594db8c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   6720d4417a5d4       kube-proxy-wp8vw                            kube-system
	5beca1f0b2559       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago      Running             etcd                      0                   ecb98be0a06bd       etcd-newest-cni-261704                      kube-system
	0e888872d4604       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 seconds ago      Running             kube-controller-manager   0                   aadd8a832cda4       kube-controller-manager-newest-cni-261704   kube-system
	b5bc9309ea6bd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            0                   9b9134416817a       kube-scheduler-newest-cni-261704            kube-system
	0f8e741e44979       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago      Running             kube-apiserver            0                   943db46239d39       kube-apiserver-newest-cni-261704            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-261704
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-261704
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=newest-cni-261704
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_59_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:59:46 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-261704
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:59:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:59:49 +0000   Sun, 23 Nov 2025 08:59:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:59:49 +0000   Sun, 23 Nov 2025 08:59:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:59:49 +0000   Sun, 23 Nov 2025 08:59:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 23 Nov 2025 08:59:49 +0000   Sun, 23 Nov 2025 08:59:41 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-261704
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                56e8b4b2-8e75-46e0-8d33-48b3ccd6ced8
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-261704                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7s
	  kube-system                 kindnet-k7fsm                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-261704             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-261704    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-wp8vw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-261704             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 2s    kube-proxy       
	  Normal   Starting                 8s    kubelet          Starting kubelet.
	  Warning  CgroupV1                 8s    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7s    kubelet          Node newest-cni-261704 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7s    kubelet          Node newest-cni-261704 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7s    kubelet          Node newest-cni-261704 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s    node-controller  Node newest-cni-261704 event: Registered Node newest-cni-261704 in Controller
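The Ready=False condition above is expected at this point in the timeline: the kubelet reports "no CNI configuration file in /etc/cni/net.d", and the kindnet pod that writes that file had only just started (see the CRI-O section above). As a sketch of how to confirm the node settles, assuming the kubeconfig context carries the profile name as minikube configures it:

    # kindnet drops its config into /etc/cni/net.d once its container is running
    minikube ssh -p newest-cni-261704 "ls /etc/cni/net.d"
    kubectl --context newest-cni-261704 get nodes -w    # watch until STATUS becomes Ready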
	
	
	==> dmesg <==
	[Nov23 08:37] overlayfs: idmapped layers are currently not supported
	[Nov23 08:38] overlayfs: idmapped layers are currently not supported
	[  +8.276067] overlayfs: idmapped layers are currently not supported
	[Nov23 08:39] overlayfs: idmapped layers are currently not supported
	[ +25.090966] overlayfs: idmapped layers are currently not supported
	[Nov23 08:40] overlayfs: idmapped layers are currently not supported
	[ +26.896711] overlayfs: idmapped layers are currently not supported
	[Nov23 08:41] overlayfs: idmapped layers are currently not supported
	[Nov23 08:43] overlayfs: idmapped layers are currently not supported
	[Nov23 08:45] overlayfs: idmapped layers are currently not supported
	[Nov23 08:46] overlayfs: idmapped layers are currently not supported
	[Nov23 08:47] overlayfs: idmapped layers are currently not supported
	[Nov23 08:49] overlayfs: idmapped layers are currently not supported
	[Nov23 08:51] overlayfs: idmapped layers are currently not supported
	[ +55.116920] overlayfs: idmapped layers are currently not supported
	[Nov23 08:52] overlayfs: idmapped layers are currently not supported
	[  +5.731396] overlayfs: idmapped layers are currently not supported
	[Nov23 08:53] overlayfs: idmapped layers are currently not supported
	[Nov23 08:54] overlayfs: idmapped layers are currently not supported
	[Nov23 08:55] overlayfs: idmapped layers are currently not supported
	[Nov23 08:56] overlayfs: idmapped layers are currently not supported
	[Nov23 08:57] overlayfs: idmapped layers are currently not supported
	[Nov23 08:58] overlayfs: idmapped layers are currently not supported
	[ +37.440319] overlayfs: idmapped layers are currently not supported
	[Nov23 08:59] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5beca1f0b2559fc84c3e3932872fbec00b4d903799f08799d7c8d5b57547f14d] <==
	{"level":"warn","ts":"2025-11-23T08:59:44.206408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.221381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.246095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.258765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.290485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.303560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.321270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.362636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.364448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.407925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.431501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.460145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.504533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.536592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.556404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.571623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.586705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.607037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.618477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.640924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.675225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.700744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.735685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.770232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:44.905413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38868","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:59:57 up  9:42,  0 user,  load average: 4.09, 3.42, 2.82
	Linux newest-cni-261704 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [55ab38111123e02f3051dd8042f46d5d127ea4c353c2d498e3aa07c6e8f7de9c] <==
	I1123 08:59:54.374283       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:59:54.375188       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 08:59:54.427532       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:59:54.427565       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:59:54.427578       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:59:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:59:54.546006       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:59:54.546032       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:59:54.546041       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:59:54.546369       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [0f8e741e44979903bc326f9f6e9c4022d2cc3d88bdda7a702ecfcb6c5f7b9131] <==
	I1123 08:59:46.306852       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 08:59:46.306989       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 08:59:46.369511       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:59:46.408619       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:59:46.413596       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:59:46.422997       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:59:46.442895       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:59:46.443045       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:59:46.834956       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:59:46.839981       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:59:46.840004       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:59:47.510577       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:59:47.557359       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:59:47.670977       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:59:47.678670       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 08:59:47.679872       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:59:47.685135       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:59:47.922439       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:59:48.918772       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:59:48.936924       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:59:48.951834       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:59:53.653081       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:59:53.780259       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:59:53.852480       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:59:54.063384       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0e888872d4604a02cf99fe613fb08d38243e3c2f5138f695ab9709f3001aa8dc] <==
	I1123 08:59:52.932733       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:59:52.933281       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:59:52.940382       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:59:52.945793       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-261704" podCIDRs=["10.42.0.0/24"]
	I1123 08:59:52.950329       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:59:52.952459       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 08:59:52.958797       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 08:59:52.961056       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:59:52.969719       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:59:52.969817       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:59:52.969848       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:59:52.969735       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:59:52.971862       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 08:59:52.972164       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 08:59:52.972225       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:59:52.972236       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 08:59:52.972250       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:59:52.972259       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:59:52.972607       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:59:52.972621       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:59:52.972635       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:59:52.973745       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:59:52.975715       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 08:59:52.977985       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:59:52.983307       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-proxy [85476e594db8c4b4920b0ef75d6ad50c7767496dd94917c2e673d13b39330103] <==
	I1123 08:59:54.491102       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:59:54.588658       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:59:54.689507       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:59:54.689557       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 08:59:54.689638       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:59:54.823771       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:59:54.823825       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:59:54.842436       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:59:54.842991       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:59:54.843015       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:59:54.844476       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:59:54.844552       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:59:54.844978       1 config.go:200] "Starting service config controller"
	I1123 08:59:54.847230       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:59:54.848896       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:59:54.848974       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:59:54.854593       1 config.go:309] "Starting node config controller"
	I1123 08:59:54.854676       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:59:54.854706       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:59:54.946072       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:59:54.951385       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:59:54.951717       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
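The only error in the kube-proxy log is the configuration warning about nodePortAddresses being unset, which just means NodePort traffic is accepted on every local IP. If that mattered for a test, the suggestion in the message maps to a single field in the kube-proxy ConfigMap that kubeadm manages; a sketch (field name per KubeProxyConfiguration; the config.conf key is the kubeadm default):

    kubectl --context newest-cni-261704 -n kube-system get configmap kube-proxy -o yaml
    # in the config.conf key, setting
    #   nodePortAddresses: ["primary"]
    # restricts NodePorts to the node's primary IPs and silences the warning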
	
	
	==> kube-scheduler [b5bc9309ea6bd2c6a8fd8269028b92305782168feaf3439c376892dd000aa74c] <==
	I1123 08:59:46.490557       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:59:46.493003       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:59:46.493068       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:59:46.493843       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:59:46.493906       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1123 08:59:46.504745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:59:46.504879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:59:46.504956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:59:46.505033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:59:46.505101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:59:46.505188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:59:46.505259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:59:46.505329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:59:46.505399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:59:46.505548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:59:46.505631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:59:46.505711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:59:46.505883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:59:46.505966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:59:46.506034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:59:46.506122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:59:46.506196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:59:46.506327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:59:46.506495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1123 08:59:48.093331       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:59:49 newest-cni-261704 kubelet[1305]: I1123 08:59:49.262789    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/e4270794b56a92829fa51bbff8985f77-etcd-certs\") pod \"etcd-newest-cni-261704\" (UID: \"e4270794b56a92829fa51bbff8985f77\") " pod="kube-system/etcd-newest-cni-261704"
	Nov 23 08:59:49 newest-cni-261704 kubelet[1305]: I1123 08:59:49.827861    1305 apiserver.go:52] "Watching apiserver"
	Nov 23 08:59:49 newest-cni-261704 kubelet[1305]: I1123 08:59:49.860246    1305 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 23 08:59:49 newest-cni-261704 kubelet[1305]: I1123 08:59:49.982490    1305 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-261704"
	Nov 23 08:59:49 newest-cni-261704 kubelet[1305]: I1123 08:59:49.982729    1305 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-261704"
	Nov 23 08:59:50 newest-cni-261704 kubelet[1305]: E1123 08:59:50.001199    1305 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-261704\" already exists" pod="kube-system/kube-apiserver-newest-cni-261704"
	Nov 23 08:59:50 newest-cni-261704 kubelet[1305]: E1123 08:59:50.001864    1305 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-261704\" already exists" pod="kube-system/kube-scheduler-newest-cni-261704"
	Nov 23 08:59:50 newest-cni-261704 kubelet[1305]: I1123 08:59:50.038859    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-261704" podStartSLOduration=1.038809482 podStartE2EDuration="1.038809482s" podCreationTimestamp="2025-11-23 08:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:59:50.025125281 +0000 UTC m=+1.295291301" watchObservedRunningTime="2025-11-23 08:59:50.038809482 +0000 UTC m=+1.308975502"
	Nov 23 08:59:50 newest-cni-261704 kubelet[1305]: I1123 08:59:50.039201    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-261704" podStartSLOduration=1.039112726 podStartE2EDuration="1.039112726s" podCreationTimestamp="2025-11-23 08:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:59:50.036902434 +0000 UTC m=+1.307068446" watchObservedRunningTime="2025-11-23 08:59:50.039112726 +0000 UTC m=+1.309278755"
	Nov 23 08:59:50 newest-cni-261704 kubelet[1305]: I1123 08:59:50.080977    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-261704" podStartSLOduration=1.080956462 podStartE2EDuration="1.080956462s" podCreationTimestamp="2025-11-23 08:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:59:50.051245135 +0000 UTC m=+1.321411155" watchObservedRunningTime="2025-11-23 08:59:50.080956462 +0000 UTC m=+1.351122474"
	Nov 23 08:59:50 newest-cni-261704 kubelet[1305]: I1123 08:59:50.115756    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-261704" podStartSLOduration=1.115739137 podStartE2EDuration="1.115739137s" podCreationTimestamp="2025-11-23 08:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:59:50.0830542 +0000 UTC m=+1.353220220" watchObservedRunningTime="2025-11-23 08:59:50.115739137 +0000 UTC m=+1.385905174"
	Nov 23 08:59:53 newest-cni-261704 kubelet[1305]: I1123 08:59:53.038463    1305 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 23 08:59:53 newest-cni-261704 kubelet[1305]: I1123 08:59:53.039040    1305 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 23 08:59:53 newest-cni-261704 kubelet[1305]: I1123 08:59:53.905265    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c5f3452-ed50-4a8d-82e3-51abceb3b21b-xtables-lock\") pod \"kindnet-k7fsm\" (UID: \"7c5f3452-ed50-4a8d-82e3-51abceb3b21b\") " pod="kube-system/kindnet-k7fsm"
	Nov 23 08:59:53 newest-cni-261704 kubelet[1305]: I1123 08:59:53.905305    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/36630050-6d8d-433a-a3bc-77fc44b8484e-kube-proxy\") pod \"kube-proxy-wp8vw\" (UID: \"36630050-6d8d-433a-a3bc-77fc44b8484e\") " pod="kube-system/kube-proxy-wp8vw"
	Nov 23 08:59:53 newest-cni-261704 kubelet[1305]: I1123 08:59:53.905324    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7ssk\" (UniqueName: \"kubernetes.io/projected/36630050-6d8d-433a-a3bc-77fc44b8484e-kube-api-access-k7ssk\") pod \"kube-proxy-wp8vw\" (UID: \"36630050-6d8d-433a-a3bc-77fc44b8484e\") " pod="kube-system/kube-proxy-wp8vw"
	Nov 23 08:59:53 newest-cni-261704 kubelet[1305]: I1123 08:59:53.905347    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvp6n\" (UniqueName: \"kubernetes.io/projected/7c5f3452-ed50-4a8d-82e3-51abceb3b21b-kube-api-access-kvp6n\") pod \"kindnet-k7fsm\" (UID: \"7c5f3452-ed50-4a8d-82e3-51abceb3b21b\") " pod="kube-system/kindnet-k7fsm"
	Nov 23 08:59:53 newest-cni-261704 kubelet[1305]: I1123 08:59:53.905378    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36630050-6d8d-433a-a3bc-77fc44b8484e-xtables-lock\") pod \"kube-proxy-wp8vw\" (UID: \"36630050-6d8d-433a-a3bc-77fc44b8484e\") " pod="kube-system/kube-proxy-wp8vw"
	Nov 23 08:59:53 newest-cni-261704 kubelet[1305]: I1123 08:59:53.905398    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7c5f3452-ed50-4a8d-82e3-51abceb3b21b-cni-cfg\") pod \"kindnet-k7fsm\" (UID: \"7c5f3452-ed50-4a8d-82e3-51abceb3b21b\") " pod="kube-system/kindnet-k7fsm"
	Nov 23 08:59:53 newest-cni-261704 kubelet[1305]: I1123 08:59:53.905413    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c5f3452-ed50-4a8d-82e3-51abceb3b21b-lib-modules\") pod \"kindnet-k7fsm\" (UID: \"7c5f3452-ed50-4a8d-82e3-51abceb3b21b\") " pod="kube-system/kindnet-k7fsm"
	Nov 23 08:59:53 newest-cni-261704 kubelet[1305]: I1123 08:59:53.905814    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36630050-6d8d-433a-a3bc-77fc44b8484e-lib-modules\") pod \"kube-proxy-wp8vw\" (UID: \"36630050-6d8d-433a-a3bc-77fc44b8484e\") " pod="kube-system/kube-proxy-wp8vw"
	Nov 23 08:59:54 newest-cni-261704 kubelet[1305]: I1123 08:59:54.054436    1305 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 08:59:54 newest-cni-261704 kubelet[1305]: W1123 08:59:54.201817    1305 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772/crio-6720d4417a5d4be16a7d2d0c63636d12ac60b216229547209ed894bfcf579737 WatchSource:0}: Error finding container 6720d4417a5d4be16a7d2d0c63636d12ac60b216229547209ed894bfcf579737: Status 404 returned error can't find the container with id 6720d4417a5d4be16a7d2d0c63636d12ac60b216229547209ed894bfcf579737
	Nov 23 08:59:55 newest-cni-261704 kubelet[1305]: I1123 08:59:55.019493    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wp8vw" podStartSLOduration=2.019474946 podStartE2EDuration="2.019474946s" podCreationTimestamp="2025-11-23 08:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:59:55.019169191 +0000 UTC m=+6.289335309" watchObservedRunningTime="2025-11-23 08:59:55.019474946 +0000 UTC m=+6.289640966"
	Nov 23 08:59:57 newest-cni-261704 kubelet[1305]: I1123 08:59:57.002336    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-k7fsm" podStartSLOduration=4.002313242 podStartE2EDuration="4.002313242s" podCreationTimestamp="2025-11-23 08:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:59:55.043009823 +0000 UTC m=+6.313175843" watchObservedRunningTime="2025-11-23 08:59:57.002313242 +0000 UTC m=+8.272479262"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-261704 -n newest-cni-261704
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-261704 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-mdvx8 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-261704 describe pod coredns-66bc5c9577-mdvx8 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-261704 describe pod coredns-66bc5c9577-mdvx8 storage-provisioner: exit status 1 (111.863218ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-mdvx8" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-261704 describe pod coredns-66bc5c9577-mdvx8 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.94s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-591175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-591175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (350.714979ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:59:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
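The MK_ADDON_ENABLE_PAUSED failure comes from minikube's pre-flight check for paused containers, which runs `sudo runc list -f json` inside the node and here fails because /run/runc is missing in the node container. The check can be reproduced by hand against the node (a sketch, assuming the container name no-preload-591175 shown in the docker inspect output below):

	# re-run the exact command minikube executed inside the node container
	docker exec no-preload-591175 sudo runc list -f json
	# check whether the runc state directory exists at all
	docker exec no-preload-591175 ls -ld /run/runc
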
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-591175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-591175 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-591175 describe deploy/metrics-server -n kube-system: exit status 1 (112.961239ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-591175 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
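Since the addons enable call exited before applying any manifests, the metrics-server Deployment was never created, which is why the describe command above returns NotFound rather than a deployment carrying the wrong image. On a run where the addon does get applied, the image override can be checked directly (a sketch using the same kubectl context; the jsonpath expression is illustrative, not taken from the test):

	kubectl --context no-preload-591175 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
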
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-591175
helpers_test.go:243: (dbg) docker inspect no-preload-591175:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369",
	        "Created": "2025-11-23T08:58:38.098322261Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1240758,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:58:38.180587064Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369/hostname",
	        "HostsPath": "/var/lib/docker/containers/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369/hosts",
	        "LogPath": "/var/lib/docker/containers/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369-json.log",
	        "Name": "/no-preload-591175",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-591175:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-591175",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369",
	                "LowerDir": "/var/lib/docker/overlay2/771f258756c2bbb7a52acc018af18f3945b3a6a6c890b53f5dd366fd3977c014-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/771f258756c2bbb7a52acc018af18f3945b3a6a6c890b53f5dd366fd3977c014/merged",
	                "UpperDir": "/var/lib/docker/overlay2/771f258756c2bbb7a52acc018af18f3945b3a6a6c890b53f5dd366fd3977c014/diff",
	                "WorkDir": "/var/lib/docker/overlay2/771f258756c2bbb7a52acc018af18f3945b3a6a6c890b53f5dd366fd3977c014/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-591175",
	                "Source": "/var/lib/docker/volumes/no-preload-591175/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-591175",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-591175",
	                "name.minikube.sigs.k8s.io": "no-preload-591175",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5bb8d2e73039e72ffda8691c9fe990ca992dc0e03826838931062cb3f1a6990f",
	            "SandboxKey": "/var/run/docker/netns/5bb8d2e73039",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34542"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34543"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34546"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34544"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34545"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-591175": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:79:50:a3:e7:4e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5cb890fde481b5761669b16b762b3e0bbd64d2ef935451546915fdbb684d58af",
	                    "EndpointID": "7defee76c2ca71cbfeb85613691c57dac03be47f4ef983248d2cc4cbfaf242f4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-591175",
	                        "14f3744363b8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
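The inspect output shows the node container is running with the API server port 8443 published on 127.0.0.1:34545, so Docker-level networking is intact and the failure is confined to the node's container runtime state. The published port can be confirmed without parsing the full JSON (a sketch; docker port is part of the standard Docker CLI):

	docker port no-preload-591175 8443
	# expected, per the inspect above: 127.0.0.1:34545
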
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-591175 -n no-preload-591175
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-591175 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-591175 logs -n 25: (1.558558255s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p default-k8s-diff-port-262764 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p cert-expiration-322507 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-322507       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p cert-expiration-322507                                                                                                                                                                                                                     │ cert-expiration-322507       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-262764 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-262764 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-262764 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ start   │ -p default-k8s-diff-port-262764 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:58 UTC │
	│ addons  │ enable metrics-server -p embed-certs-879861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ stop    │ -p embed-certs-879861 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-879861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ image   │ default-k8s-diff-port-262764 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ pause   │ -p default-k8s-diff-port-262764 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-262764                                                                                                                                                                                                               │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ delete  │ -p default-k8s-diff-port-262764                                                                                                                                                                                                               │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ delete  │ -p disable-driver-mounts-880590                                                                                                                                                                                                               │ disable-driver-mounts-880590 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p no-preload-591175 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:59 UTC │
	│ image   │ embed-certs-879861 image list --format=json                                                                                                                                                                                                   │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ pause   │ -p embed-certs-879861 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	│ delete  │ -p embed-certs-879861                                                                                                                                                                                                                         │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ delete  │ -p embed-certs-879861                                                                                                                                                                                                                         │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p newest-cni-261704 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ addons  │ enable metrics-server -p newest-cni-261704 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-591175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:59:17
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:59:17.452684 1244564 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:59:17.452899 1244564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:59:17.452926 1244564 out.go:374] Setting ErrFile to fd 2...
	I1123 08:59:17.452946 1244564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:59:17.453215 1244564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:59:17.453682 1244564 out.go:368] Setting JSON to false
	I1123 08:59:17.454726 1244564 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34903,"bootTime":1763853455,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 08:59:17.454820 1244564 start.go:143] virtualization:  
	I1123 08:59:17.459172 1244564 out.go:179] * [newest-cni-261704] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:59:17.462622 1244564 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:59:17.462722 1244564 notify.go:221] Checking for updates...
	I1123 08:59:17.469143 1244564 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:59:17.472274 1244564 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:59:17.475342 1244564 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 08:59:17.478400 1244564 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:59:17.481504 1244564 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:59:17.484979 1244564 config.go:182] Loaded profile config "no-preload-591175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:59:17.485123 1244564 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:59:17.530301 1244564 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:59:17.530484 1244564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:59:17.630889 1244564 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:59:17.616958979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:59:17.630991 1244564 docker.go:319] overlay module found
	I1123 08:59:17.634191 1244564 out.go:179] * Using the docker driver based on user configuration
	I1123 08:59:17.637148 1244564 start.go:309] selected driver: docker
	I1123 08:59:17.637166 1244564 start.go:927] validating driver "docker" against <nil>
	I1123 08:59:17.637180 1244564 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:59:17.637886 1244564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:59:17.728212 1244564 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:59:17.718907792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:59:17.728373 1244564 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1123 08:59:17.728398 1244564 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1123 08:59:17.728633 1244564 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 08:59:17.733216 1244564 out.go:179] * Using Docker driver with root privileges
	I1123 08:59:17.736084 1244564 cni.go:84] Creating CNI manager for ""
	I1123 08:59:17.736155 1244564 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:59:17.736166 1244564 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:59:17.736246 1244564 start.go:353] cluster config:
	{Name:newest-cni-261704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-261704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:59:17.739470 1244564 out.go:179] * Starting "newest-cni-261704" primary control-plane node in "newest-cni-261704" cluster
	I1123 08:59:17.742381 1244564 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:59:17.745320 1244564 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:59:17.748124 1244564 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:59:17.748178 1244564 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 08:59:17.748191 1244564 cache.go:65] Caching tarball of preloaded images
	I1123 08:59:17.748271 1244564 preload.go:238] Found /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 08:59:17.748286 1244564 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:59:17.748402 1244564 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/config.json ...
	I1123 08:59:17.748425 1244564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/config.json: {Name:mkcebbfe251a76b43ceb568921f830f3797ff098 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:17.748578 1244564 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:59:17.774653 1244564 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:59:17.774677 1244564 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:59:17.774697 1244564 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:59:17.774726 1244564 start.go:360] acquireMachinesLock for newest-cni-261704: {Name:mkc157815d36ad5358be83723f2b82d59972bd12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:59:17.774827 1244564 start.go:364] duration metric: took 81.606µs to acquireMachinesLock for "newest-cni-261704"
	I1123 08:59:17.774857 1244564 start.go:93] Provisioning new machine with config: &{Name:newest-cni-261704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-261704 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:59:17.774927 1244564 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:59:17.778434 1244564 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:59:17.778664 1244564 start.go:159] libmachine.API.Create for "newest-cni-261704" (driver="docker")
	I1123 08:59:17.778707 1244564 client.go:173] LocalClient.Create starting
	I1123 08:59:17.778768 1244564 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem
	I1123 08:59:17.778825 1244564 main.go:143] libmachine: Decoding PEM data...
	I1123 08:59:17.778849 1244564 main.go:143] libmachine: Parsing certificate...
	I1123 08:59:17.778900 1244564 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem
	I1123 08:59:17.778922 1244564 main.go:143] libmachine: Decoding PEM data...
	I1123 08:59:17.778938 1244564 main.go:143] libmachine: Parsing certificate...
	I1123 08:59:17.779331 1244564 cli_runner.go:164] Run: docker network inspect newest-cni-261704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:59:17.796536 1244564 cli_runner.go:211] docker network inspect newest-cni-261704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:59:17.796624 1244564 network_create.go:284] running [docker network inspect newest-cni-261704] to gather additional debugging logs...
	I1123 08:59:17.796644 1244564 cli_runner.go:164] Run: docker network inspect newest-cni-261704
	W1123 08:59:17.810963 1244564 cli_runner.go:211] docker network inspect newest-cni-261704 returned with exit code 1
	I1123 08:59:17.810996 1244564 network_create.go:287] error running [docker network inspect newest-cni-261704]: docker network inspect newest-cni-261704: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-261704 not found
	I1123 08:59:17.811009 1244564 network_create.go:289] output of [docker network inspect newest-cni-261704]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-261704 not found
	
	** /stderr **
	I1123 08:59:17.811118 1244564 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:59:17.826871 1244564 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-32d396d9f7df IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:9b:29:4a:5c:ab} reservation:<nil>}
	I1123 08:59:17.827176 1244564 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-859c97accd92 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:ea:cf:62:f4:f8} reservation:<nil>}
	I1123 08:59:17.827546 1244564 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-50e966d7b39a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:1d:b6:b9:b9:ef} reservation:<nil>}
	I1123 08:59:17.827958 1244564 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e66d0}
	I1123 08:59:17.827982 1244564 network_create.go:124] attempt to create docker network newest-cni-261704 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 08:59:17.828044 1244564 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-261704 newest-cni-261704
	I1123 08:59:17.897302 1244564 network_create.go:108] docker network newest-cni-261704 192.168.76.0/24 created
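For reference, the free-subnet probe above (skipping 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 before settling on 192.168.76.0/24) can be re-checked by hand with the docker CLI; a minimal sketch, using the network name from this run:

	# list existing docker networks, then show the subnet the new one was given
	docker network ls --format '{{.Name}}'
	docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' newest-cni-261704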
	I1123 08:59:17.897338 1244564 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-261704" container
	I1123 08:59:17.897415 1244564 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:59:17.917695 1244564 cli_runner.go:164] Run: docker volume create newest-cni-261704 --label name.minikube.sigs.k8s.io=newest-cni-261704 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:59:17.946775 1244564 oci.go:103] Successfully created a docker volume newest-cni-261704
	I1123 08:59:17.946872 1244564 cli_runner.go:164] Run: docker run --rm --name newest-cni-261704-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-261704 --entrypoint /usr/bin/test -v newest-cni-261704:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:59:18.596670 1244564 oci.go:107] Successfully prepared a docker volume newest-cni-261704
	I1123 08:59:18.596752 1244564 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:59:18.596763 1244564 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:59:18.596833 1244564 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-261704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
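The preloaded-image tarball is unpacked straight into the newest-cni-261704 docker volume by the throwaway tar container above. One way to see what landed in the volume (a sketch; busybox is only used as a convenient shell image here, and the lib/ layout is assumed from the sidecar mount earlier in the log):

	# mount the volume the way the node container will (at /var) and list it
	docker run --rm -v newest-cni-261704:/var busybox ls /var/lib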
	I1123 08:59:22.772927 1240463 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:59:22.773007 1240463 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:59:22.773122 1240463 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:59:22.773195 1240463 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:59:22.773240 1240463 kubeadm.go:319] OS: Linux
	I1123 08:59:22.773291 1240463 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:59:22.773346 1240463 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:59:22.773396 1240463 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:59:22.773455 1240463 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:59:22.773509 1240463 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:59:22.773567 1240463 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:59:22.773623 1240463 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:59:22.773677 1240463 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:59:22.773736 1240463 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:59:22.773822 1240463 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:59:22.773938 1240463 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:59:22.774067 1240463 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:59:22.774147 1240463 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:59:22.789914 1240463 out.go:252]   - Generating certificates and keys ...
	I1123 08:59:22.790057 1240463 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:59:22.790123 1240463 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:59:22.790201 1240463 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:59:22.790266 1240463 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:59:22.790343 1240463 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:59:22.790394 1240463 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:59:22.790449 1240463 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:59:22.790599 1240463 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-591175] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:59:22.790658 1240463 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:59:22.790786 1240463 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-591175] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:59:22.790857 1240463 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:59:22.790926 1240463 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:59:22.790976 1240463 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:59:22.791032 1240463 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:59:22.791082 1240463 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:59:22.791151 1240463 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:59:22.791302 1240463 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:59:22.791367 1240463 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:59:22.791421 1240463 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:59:22.791511 1240463 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:59:22.791577 1240463 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:59:22.825528 1240463 out.go:252]   - Booting up control plane ...
	I1123 08:59:22.825645 1240463 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:59:22.825736 1240463 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:59:22.825813 1240463 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:59:22.825928 1240463 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:59:22.826031 1240463 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:59:22.826146 1240463 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:59:22.826240 1240463 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:59:22.826286 1240463 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:59:22.826429 1240463 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:59:22.826543 1240463 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:59:22.826610 1240463 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001104823s
	I1123 08:59:22.826712 1240463 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:59:22.826802 1240463 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1123 08:59:22.826898 1240463 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:59:22.826982 1240463 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:59:22.827062 1240463 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.133732751s
	I1123 08:59:22.827133 1240463 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.57464497s
	I1123 08:59:22.827237 1240463 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.50245645s
	I1123 08:59:22.827402 1240463 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:59:22.827571 1240463 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:59:22.827654 1240463 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:59:22.827844 1240463 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-591175 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:59:22.827928 1240463 kubeadm.go:319] [bootstrap-token] Using token: prto18.avdat22o9zcyjdgf
	I1123 08:59:22.853241 1240463 out.go:252]   - Configuring RBAC rules ...
	I1123 08:59:22.853370 1240463 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:59:22.853463 1240463 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:59:22.853610 1240463 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:59:22.853767 1240463 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:59:22.853898 1240463 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:59:22.854027 1240463 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:59:22.854156 1240463 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:59:22.854214 1240463 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:59:22.854271 1240463 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:59:22.854282 1240463 kubeadm.go:319] 
	I1123 08:59:22.854363 1240463 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:59:22.854381 1240463 kubeadm.go:319] 
	I1123 08:59:22.854465 1240463 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:59:22.854472 1240463 kubeadm.go:319] 
	I1123 08:59:22.854498 1240463 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:59:22.854587 1240463 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:59:22.854686 1240463 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:59:22.854701 1240463 kubeadm.go:319] 
	I1123 08:59:22.854765 1240463 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:59:22.854772 1240463 kubeadm.go:319] 
	I1123 08:59:22.854853 1240463 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:59:22.854866 1240463 kubeadm.go:319] 
	I1123 08:59:22.854924 1240463 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:59:22.855013 1240463 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:59:22.855090 1240463 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:59:22.855097 1240463 kubeadm.go:319] 
	I1123 08:59:22.855262 1240463 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:59:22.855355 1240463 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:59:22.855362 1240463 kubeadm.go:319] 
	I1123 08:59:22.855460 1240463 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token prto18.avdat22o9zcyjdgf \
	I1123 08:59:22.855577 1240463 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb \
	I1123 08:59:22.855603 1240463 kubeadm.go:319] 	--control-plane 
	I1123 08:59:22.855610 1240463 kubeadm.go:319] 
	I1123 08:59:22.855701 1240463 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:59:22.855709 1240463 kubeadm.go:319] 
	I1123 08:59:22.855797 1240463 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token prto18.avdat22o9zcyjdgf \
	I1123 08:59:22.855925 1240463 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb 
	I1123 08:59:22.855937 1240463 cni.go:84] Creating CNI manager for ""
	I1123 08:59:22.855945 1240463 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:59:22.887198 1240463 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:59:22.914726 1240463 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:59:22.918859 1240463 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:59:22.918877 1240463 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:59:22.932290 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
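The manifest applied here is the kindnet CNI recommended two lines earlier for the docker driver with the crio runtime. Once the kubeconfig has been updated, its pods can be checked from the host; a sketch assuming the kubectl context minikube creates for this profile:

	kubectl --context no-preload-591175 -n kube-system get pods -o wide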
	I1123 08:59:23.253021 1240463 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:59:23.253117 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:23.253156 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-591175 minikube.k8s.io/updated_at=2025_11_23T08_59_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=no-preload-591175 minikube.k8s.io/primary=true
	I1123 08:59:23.284136 1240463 ops.go:34] apiserver oom_adj: -16
	I1123 08:59:23.621460 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:24.122108 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:24.621646 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:25.121502 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:25.622097 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:26.121650 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:26.621704 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:27.121638 1240463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:27.236338 1240463 kubeadm.go:1114] duration metric: took 3.983274333s to wait for elevateKubeSystemPrivileges
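The repeated "kubectl get sa default" runs above are a poll loop: minikube waits (about 4 seconds here, per the duration metric) until the default ServiceAccount exists before treating the cluster as ready for privilege elevation. A rough standalone equivalent of that wait, run on the node with the exact command from the log:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done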
	I1123 08:59:27.236364 1240463 kubeadm.go:403] duration metric: took 25.412811168s to StartCluster
	I1123 08:59:27.236384 1240463 settings.go:142] acquiring lock: {Name:mk23f3092f33e47ced9558cb4bac2b30c55547fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:27.236448 1240463 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:59:27.237079 1240463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:27.237291 1240463 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:59:27.237428 1240463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:59:27.237673 1240463 config.go:182] Loaded profile config "no-preload-591175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:59:27.237639 1240463 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:59:27.237720 1240463 addons.go:70] Setting default-storageclass=true in profile "no-preload-591175"
	I1123 08:59:27.237740 1240463 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-591175"
	I1123 08:59:27.238058 1240463 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 08:59:27.237720 1240463 addons.go:70] Setting storage-provisioner=true in profile "no-preload-591175"
	I1123 08:59:27.238477 1240463 addons.go:239] Setting addon storage-provisioner=true in "no-preload-591175"
	I1123 08:59:27.238508 1240463 host.go:66] Checking if "no-preload-591175" exists ...
	I1123 08:59:27.238917 1240463 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 08:59:27.240449 1240463 out.go:179] * Verifying Kubernetes components...
	I1123 08:59:27.244144 1240463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:27.269242 1240463 addons.go:239] Setting addon default-storageclass=true in "no-preload-591175"
	I1123 08:59:27.269283 1240463 host.go:66] Checking if "no-preload-591175" exists ...
	I1123 08:59:27.269696 1240463 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 08:59:27.287040 1240463 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:59:23.285755 1244564 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-261704:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.688886773s)
	I1123 08:59:23.285788 1244564 kic.go:203] duration metric: took 4.689022326s to extract preloaded images to volume ...
	W1123 08:59:23.285918 1244564 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:59:23.286011 1244564 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:59:23.379608 1244564 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-261704 --name newest-cni-261704 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-261704 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-261704 --network newest-cni-261704 --ip 192.168.76.2 --volume newest-cni-261704:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:59:23.773470 1244564 cli_runner.go:164] Run: docker container inspect newest-cni-261704 --format={{.State.Running}}
	I1123 08:59:23.791151 1244564 cli_runner.go:164] Run: docker container inspect newest-cni-261704 --format={{.State.Status}}
	I1123 08:59:23.812608 1244564 cli_runner.go:164] Run: docker exec newest-cni-261704 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:59:23.874956 1244564 oci.go:144] the created container "newest-cni-261704" has a running status.
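The docker run above publishes the container's SSH, API-server and registry ports on random loopback ports (--publish=127.0.0.1::22 and so on); the SSH mapping resolved for this run is the 127.0.0.1:34547 address used by the SSH client later in the log. The mappings can be listed directly:

	# show all published port mappings for the node container
	docker port newest-cni-261704
	# or just the SSH one
	docker port newest-cni-261704 22/tcp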
	I1123 08:59:23.874985 1244564 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/newest-cni-261704/id_rsa...
	I1123 08:59:24.036379 1244564 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/newest-cni-261704/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:59:24.064859 1244564 cli_runner.go:164] Run: docker container inspect newest-cni-261704 --format={{.State.Status}}
	I1123 08:59:24.093693 1244564 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:59:24.093719 1244564 kic_runner.go:114] Args: [docker exec --privileged newest-cni-261704 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:59:24.167294 1244564 cli_runner.go:164] Run: docker container inspect newest-cni-261704 --format={{.State.Status}}
	I1123 08:59:24.186569 1244564 machine.go:94] provisionDockerMachine start ...
	I1123 08:59:24.186671 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:24.215125 1244564 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:24.219292 1244564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34547 <nil> <nil>}
	I1123 08:59:24.219312 1244564 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:59:24.220081 1244564 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:59:27.426651 1244564 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-261704
	
	I1123 08:59:27.426674 1244564 ubuntu.go:182] provisioning hostname "newest-cni-261704"
	I1123 08:59:27.426736 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:27.293253 1240463 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:59:27.293287 1240463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:59:27.293351 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:59:27.308661 1240463 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:59:27.308682 1240463 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:59:27.308741 1240463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 08:59:27.331293 1240463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 08:59:27.347595 1240463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 08:59:27.670666 1240463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:59:27.731463 1240463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:59:27.808562 1240463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:59:27.808679 1240463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:59:29.106182 1240463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.374683804s)
	I1123 08:59:29.106268 1240463 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.297678176s)
	I1123 08:59:29.106286 1240463 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
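The sed pipeline completed above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the gateway address 192.168.85.1. The injected hosts block can be confirmed after the fact; a sketch assuming the profile's kubectl context:

	kubectl --context no-preload-591175 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'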
	I1123 08:59:29.107443 1240463 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.298741203s)
	I1123 08:59:29.108068 1240463 node_ready.go:35] waiting up to 6m0s for node "no-preload-591175" to be "Ready" ...
	I1123 08:59:29.111270 1240463 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1123 08:59:27.454375 1244564 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:27.454684 1244564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34547 <nil> <nil>}
	I1123 08:59:27.454695 1244564 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-261704 && echo "newest-cni-261704" | sudo tee /etc/hostname
	I1123 08:59:27.661034 1244564 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-261704
	
	I1123 08:59:27.661157 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:27.690249 1244564 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:27.690555 1244564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34547 <nil> <nil>}
	I1123 08:59:27.690577 1244564 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-261704' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-261704/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-261704' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:59:27.871593 1244564 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:59:27.871622 1244564 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 08:59:27.871664 1244564 ubuntu.go:190] setting up certificates
	I1123 08:59:27.871678 1244564 provision.go:84] configureAuth start
	I1123 08:59:27.871756 1244564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-261704
	I1123 08:59:27.896065 1244564 provision.go:143] copyHostCerts
	I1123 08:59:27.896137 1244564 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem, removing ...
	I1123 08:59:27.896150 1244564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem
	I1123 08:59:27.896229 1244564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 08:59:27.896327 1244564 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem, removing ...
	I1123 08:59:27.896338 1244564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem
	I1123 08:59:27.896366 1244564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 08:59:27.896431 1244564 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem, removing ...
	I1123 08:59:27.896440 1244564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem
	I1123 08:59:27.896464 1244564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
	I1123 08:59:27.896523 1244564 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.newest-cni-261704 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-261704]
	I1123 08:59:28.047101 1244564 provision.go:177] copyRemoteCerts
	I1123 08:59:28.047172 1244564 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:59:28.047238 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:28.071237 1244564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/newest-cni-261704/id_rsa Username:docker}
	I1123 08:59:28.183499 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:59:28.208122 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:59:28.235662 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:59:28.261414 1244564 provision.go:87] duration metric: took 389.708755ms to configureAuth
	I1123 08:59:28.261455 1244564 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:59:28.261674 1244564 config.go:182] Loaded profile config "newest-cni-261704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:59:28.261790 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:28.283704 1244564 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:28.284040 1244564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34547 <nil> <nil>}
	I1123 08:59:28.284061 1244564 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:59:28.674972 1244564 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:59:28.674995 1244564 machine.go:97] duration metric: took 4.488406812s to provisionDockerMachine
	I1123 08:59:28.675006 1244564 client.go:176] duration metric: took 10.89628715s to LocalClient.Create
	I1123 08:59:28.675036 1244564 start.go:167] duration metric: took 10.896373572s to libmachine.API.Create "newest-cni-261704"
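Provisioning above ended by writing the service-CIDR insecure-registry flag to /etc/sysconfig/crio.minikube and restarting cri-o. Both can be verified from the host through the node container (a sketch, reusing the container name from this run):

	docker exec newest-cni-261704 cat /etc/sysconfig/crio.minikube
	docker exec newest-cni-261704 systemctl is-active crio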
	I1123 08:59:28.675046 1244564 start.go:293] postStartSetup for "newest-cni-261704" (driver="docker")
	I1123 08:59:28.675056 1244564 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:59:28.675135 1244564 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:59:28.675209 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:28.707636 1244564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/newest-cni-261704/id_rsa Username:docker}
	I1123 08:59:28.825182 1244564 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:59:28.833667 1244564 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:59:28.833701 1244564 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:59:28.833712 1244564 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/addons for local assets ...
	I1123 08:59:28.833765 1244564 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/files for local assets ...
	I1123 08:59:28.833847 1244564 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem -> 10431592.pem in /etc/ssl/certs
	I1123 08:59:28.833949 1244564 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:59:28.845824 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:59:28.874731 1244564 start.go:296] duration metric: took 199.670583ms for postStartSetup
	I1123 08:59:28.875147 1244564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-261704
	I1123 08:59:28.898866 1244564 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/config.json ...
	I1123 08:59:28.899764 1244564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:59:28.899885 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:28.923155 1244564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/newest-cni-261704/id_rsa Username:docker}
	I1123 08:59:29.036840 1244564 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:59:29.041900 1244564 start.go:128] duration metric: took 11.266959559s to createHost
	I1123 08:59:29.041924 1244564 start.go:83] releasing machines lock for "newest-cni-261704", held for 11.267082271s
	I1123 08:59:29.042000 1244564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-261704
	I1123 08:59:29.058864 1244564 ssh_runner.go:195] Run: cat /version.json
	I1123 08:59:29.058922 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:29.059167 1244564 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:59:29.059255 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:29.101630 1244564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/newest-cni-261704/id_rsa Username:docker}
	I1123 08:59:29.109383 1244564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/newest-cni-261704/id_rsa Username:docker}
	I1123 08:59:29.327177 1244564 ssh_runner.go:195] Run: systemctl --version
	I1123 08:59:29.333826 1244564 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:59:29.386629 1244564 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:59:29.391969 1244564 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:59:29.392114 1244564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:59:29.434140 1244564 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:59:29.434217 1244564 start.go:496] detecting cgroup driver to use...
	I1123 08:59:29.434264 1244564 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:59:29.434350 1244564 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:59:29.458827 1244564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:59:29.471807 1244564 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:59:29.471918 1244564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:59:29.494235 1244564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:59:29.522077 1244564 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:59:29.683237 1244564 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:59:29.862882 1244564 docker.go:234] disabling docker service ...
	I1123 08:59:29.862998 1244564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:59:29.887228 1244564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:59:29.901156 1244564 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:59:30.057516 1244564 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:59:30.228173 1244564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:59:30.243336 1244564 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:59:30.258724 1244564 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:59:30.258845 1244564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:59:30.268384 1244564 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 08:59:30.268532 1244564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:59:30.276955 1244564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:59:30.285928 1244564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:59:30.294213 1244564 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:59:30.301734 1244564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:59:30.309965 1244564 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:59:30.322335 1244564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:59:30.330550 1244564 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:59:30.339255 1244564 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:59:30.346854 1244564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:30.498019 1244564 ssh_runner.go:195] Run: sudo systemctl restart crio
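The series of sed edits above rewrites the cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf: it pins the pause image to registry.k8s.io/pause:3.10.1, switches the cgroup manager to cgroupfs with conmon in the pod cgroup, and opens unprivileged ports via default_sysctls before crio is restarted. A quick way to review the result (a sketch, reusing the container name):

	docker exec newest-cni-261704 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf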
	I1123 08:59:30.805999 1244564 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:59:30.806142 1244564 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:59:30.811021 1244564 start.go:564] Will wait 60s for crictl version
	I1123 08:59:30.811139 1244564 ssh_runner.go:195] Run: which crictl
	I1123 08:59:30.815171 1244564 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:59:30.856840 1244564 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:59:30.856992 1244564 ssh_runner.go:195] Run: crio --version
	I1123 08:59:30.913613 1244564 ssh_runner.go:195] Run: crio --version
	I1123 08:59:30.948934 1244564 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 08:59:30.951901 1244564 cli_runner.go:164] Run: docker network inspect newest-cni-261704 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:59:30.968763 1244564 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 08:59:30.972911 1244564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:59:30.985213 1244564 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 08:59:29.113413 1240463 addons.go:530] duration metric: took 1.875770735s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 08:59:29.610065 1240463 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-591175" context rescaled to 1 replicas
	W1123 08:59:31.112299 1240463 node_ready.go:57] node "no-preload-591175" has "Ready":"False" status (will retry)
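This warning is the normal retry path: the node stays NotReady until the kindnet CNI applied earlier comes up, and node_ready.go keeps polling for up to 6 minutes. Waiting for the same condition by hand would look like this (a sketch, assuming the profile's kubectl context):

	kubectl --context no-preload-591175 wait --for=condition=Ready node/no-preload-591175 --timeout=360s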
	I1123 08:59:30.988156 1244564 kubeadm.go:884] updating cluster {Name:newest-cni-261704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-261704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:59:30.988320 1244564 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:59:30.988395 1244564 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:31.039444 1244564 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:59:31.039467 1244564 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:59:31.039525 1244564 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:31.077230 1244564 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:59:31.077257 1244564 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:59:31.077265 1244564 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 08:59:31.077351 1244564 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-261704 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-261704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:59:31.077437 1244564 ssh_runner.go:195] Run: crio config
	I1123 08:59:31.171766 1244564 cni.go:84] Creating CNI manager for ""
	I1123 08:59:31.171787 1244564 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:59:31.171802 1244564 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 08:59:31.171825 1244564 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-261704 NodeName:newest-cni-261704 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:59:31.171952 1244564 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-261704"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:59:31.172023 1244564 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:59:31.180242 1244564 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:59:31.180313 1244564 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:59:31.188269 1244564 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 08:59:31.207676 1244564 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:59:31.231426 1244564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1123 08:59:31.251774 1244564 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:59:31.256160 1244564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:59:31.267941 1244564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:31.402813 1244564 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:59:31.418811 1244564 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704 for IP: 192.168.76.2
	I1123 08:59:31.418831 1244564 certs.go:195] generating shared ca certs ...
	I1123 08:59:31.418847 1244564 certs.go:227] acquiring lock for ca certs: {Name:mk8b2dd1177c57b74f955f055073d275001ee616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:31.418977 1244564 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key
	I1123 08:59:31.419028 1244564 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key
	I1123 08:59:31.419039 1244564 certs.go:257] generating profile certs ...
	I1123 08:59:31.419096 1244564 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/client.key
	I1123 08:59:31.419115 1244564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/client.crt with IP's: []
	I1123 08:59:31.524617 1244564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/client.crt ...
	I1123 08:59:31.524676 1244564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/client.crt: {Name:mk2d18aee4f34c09c800bf35993d941bb666bf5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:31.524935 1244564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/client.key ...
	I1123 08:59:31.524952 1244564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/client.key: {Name:mkb299d5939af82bb93a5f43963524c7ffce0dae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:31.525178 1244564 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.key.059e974a
	I1123 08:59:31.525212 1244564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.crt.059e974a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 08:59:31.980619 1244564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.crt.059e974a ...
	I1123 08:59:31.980650 1244564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.crt.059e974a: {Name:mka6bcec19cace4fe3d2e25c3dbc530242271126 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:31.980822 1244564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.key.059e974a ...
	I1123 08:59:31.980836 1244564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.key.059e974a: {Name:mkcc07b3e148167fe5d23f183080a103a6b6316e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:31.980920 1244564 certs.go:382] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.crt.059e974a -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.crt
	I1123 08:59:31.980999 1244564 certs.go:386] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.key.059e974a -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.key
	I1123 08:59:31.981061 1244564 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/proxy-client.key
	I1123 08:59:31.981078 1244564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/proxy-client.crt with IP's: []
	I1123 08:59:32.261670 1244564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/proxy-client.crt ...
	I1123 08:59:32.261700 1244564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/proxy-client.crt: {Name:mk8a6a8c02de1362065c2dad356d7efb7d5cfcc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:32.261887 1244564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/proxy-client.key ...
	I1123 08:59:32.261902 1244564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/proxy-client.key: {Name:mkf9d35cf9ea7535b4a7d7eef85ab018f7c0ee67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:32.262101 1244564 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem (1338 bytes)
	W1123 08:59:32.262151 1244564 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159_empty.pem, impossibly tiny 0 bytes
	I1123 08:59:32.262165 1244564 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:59:32.262192 1244564 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:59:32.262219 1244564 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:59:32.262246 1244564 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem (1675 bytes)
	I1123 08:59:32.262296 1244564 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 08:59:32.262891 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:59:32.281535 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:59:32.299461 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:59:32.317887 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:59:32.339781 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 08:59:32.368772 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:59:32.393020 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:59:32.410478 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/newest-cni-261704/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:59:32.430366 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:59:32.449946 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem --> /usr/share/ca-certificates/1043159.pem (1338 bytes)
	W1123 08:59:33.611163 1240463 node_ready.go:57] node "no-preload-591175" has "Ready":"False" status (will retry)
	W1123 08:59:35.611342 1240463 node_ready.go:57] node "no-preload-591175" has "Ready":"False" status (will retry)
	I1123 08:59:32.470300 1244564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /usr/share/ca-certificates/10431592.pem (1708 bytes)
	I1123 08:59:32.487977 1244564 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:59:32.500357 1244564 ssh_runner.go:195] Run: openssl version
	I1123 08:59:32.513388 1244564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10431592.pem && ln -fs /usr/share/ca-certificates/10431592.pem /etc/ssl/certs/10431592.pem"
	I1123 08:59:32.523174 1244564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10431592.pem
	I1123 08:59:32.527075 1244564 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:03 /usr/share/ca-certificates/10431592.pem
	I1123 08:59:32.527216 1244564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10431592.pem
	I1123 08:59:32.578595 1244564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10431592.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:59:32.587052 1244564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:59:32.595167 1244564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:32.598614 1244564 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:32.598707 1244564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:32.648282 1244564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:59:32.656664 1244564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1043159.pem && ln -fs /usr/share/ca-certificates/1043159.pem /etc/ssl/certs/1043159.pem"
	I1123 08:59:32.666560 1244564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1043159.pem
	I1123 08:59:32.670341 1244564 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:03 /usr/share/ca-certificates/1043159.pem
	I1123 08:59:32.670405 1244564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1043159.pem
	I1123 08:59:32.717692 1244564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1043159.pem /etc/ssl/certs/51391683.0"
	I1123 08:59:32.726869 1244564 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:59:32.734801 1244564 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:59:32.734921 1244564 kubeadm.go:401] StartCluster: {Name:newest-cni-261704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-261704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:59:32.735004 1244564 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:59:32.735062 1244564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:59:32.762871 1244564 cri.go:89] found id: ""
	I1123 08:59:32.762983 1244564 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:59:32.776304 1244564 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:59:32.783889 1244564 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:59:32.784007 1244564 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:59:32.791763 1244564 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:59:32.791822 1244564 kubeadm.go:158] found existing configuration files:
	
	I1123 08:59:32.791913 1244564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:59:32.800153 1244564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:59:32.800224 1244564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:59:32.807913 1244564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:59:32.815338 1244564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:59:32.815405 1244564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:59:32.822791 1244564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:59:32.831343 1244564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:59:32.831465 1244564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:59:32.838625 1244564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:59:32.845903 1244564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:59:32.846022 1244564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:59:32.853097 1244564 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:59:32.928941 1244564 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 08:59:32.929236 1244564 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 08:59:32.999697 1244564 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1123 08:59:37.611835 1240463 node_ready.go:57] node "no-preload-591175" has "Ready":"False" status (will retry)
	W1123 08:59:39.612208 1240463 node_ready.go:57] node "no-preload-591175" has "Ready":"False" status (will retry)
	W1123 08:59:42.112980 1240463 node_ready.go:57] node "no-preload-591175" has "Ready":"False" status (will retry)
	I1123 08:59:42.616871 1240463 node_ready.go:49] node "no-preload-591175" is "Ready"
	I1123 08:59:42.616904 1240463 node_ready.go:38] duration metric: took 13.508818762s for node "no-preload-591175" to be "Ready" ...
	I1123 08:59:42.616917 1240463 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:59:42.616978 1240463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:59:42.636689 1240463 api_server.go:72] duration metric: took 15.399369798s to wait for apiserver process to appear ...
	I1123 08:59:42.636713 1240463 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:59:42.636732 1240463 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 08:59:42.650607 1240463 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 08:59:42.654990 1240463 api_server.go:141] control plane version: v1.34.1
	I1123 08:59:42.655022 1240463 api_server.go:131] duration metric: took 18.302131ms to wait for apiserver health ...
	I1123 08:59:42.655031 1240463 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:59:42.685942 1240463 system_pods.go:59] 8 kube-system pods found
	I1123 08:59:42.685983 1240463 system_pods.go:61] "coredns-66bc5c9577-zwlsw" [4493cf17-56c7-4aec-aff9-f1b7a47398ea] Pending
	I1123 08:59:42.685990 1240463 system_pods.go:61] "etcd-no-preload-591175" [d2307eaa-f09d-4d85-8172-b403550f572f] Running
	I1123 08:59:42.685994 1240463 system_pods.go:61] "kindnet-v65j2" [c422d680-2063-435a-8b26-e265e3554728] Running
	I1123 08:59:42.685999 1240463 system_pods.go:61] "kube-apiserver-no-preload-591175" [07643f8f-afbf-48fd-9a2c-b68e6f2a69f9] Running
	I1123 08:59:42.686005 1240463 system_pods.go:61] "kube-controller-manager-no-preload-591175" [153ceee0-38e4-41e6-98bc-915c5d18b057] Running
	I1123 08:59:42.686008 1240463 system_pods.go:61] "kube-proxy-rblwh" [8c4a2941-2f19-43ba-8f9a-7a48072b1223] Running
	I1123 08:59:42.686012 1240463 system_pods.go:61] "kube-scheduler-no-preload-591175" [ce19b8a6-00bd-4cdc-a245-0a8f9551e38d] Running
	I1123 08:59:42.686016 1240463 system_pods.go:61] "storage-provisioner" [923af3fc-5d78-45d7-ad14-fd020a72b76d] Pending
	I1123 08:59:42.686023 1240463 system_pods.go:74] duration metric: took 30.985285ms to wait for pod list to return data ...
	I1123 08:59:42.686030 1240463 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:59:42.693937 1240463 default_sa.go:45] found service account: "default"
	I1123 08:59:42.693967 1240463 default_sa.go:55] duration metric: took 7.923099ms for default service account to be created ...
	I1123 08:59:42.693977 1240463 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:59:42.702771 1240463 system_pods.go:86] 8 kube-system pods found
	I1123 08:59:42.702802 1240463 system_pods.go:89] "coredns-66bc5c9577-zwlsw" [4493cf17-56c7-4aec-aff9-f1b7a47398ea] Pending
	I1123 08:59:42.702818 1240463 system_pods.go:89] "etcd-no-preload-591175" [d2307eaa-f09d-4d85-8172-b403550f572f] Running
	I1123 08:59:42.702823 1240463 system_pods.go:89] "kindnet-v65j2" [c422d680-2063-435a-8b26-e265e3554728] Running
	I1123 08:59:42.702828 1240463 system_pods.go:89] "kube-apiserver-no-preload-591175" [07643f8f-afbf-48fd-9a2c-b68e6f2a69f9] Running
	I1123 08:59:42.702832 1240463 system_pods.go:89] "kube-controller-manager-no-preload-591175" [153ceee0-38e4-41e6-98bc-915c5d18b057] Running
	I1123 08:59:42.702836 1240463 system_pods.go:89] "kube-proxy-rblwh" [8c4a2941-2f19-43ba-8f9a-7a48072b1223] Running
	I1123 08:59:42.702841 1240463 system_pods.go:89] "kube-scheduler-no-preload-591175" [ce19b8a6-00bd-4cdc-a245-0a8f9551e38d] Running
	I1123 08:59:42.702855 1240463 system_pods.go:89] "storage-provisioner" [923af3fc-5d78-45d7-ad14-fd020a72b76d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:59:42.702870 1240463 retry.go:31] will retry after 259.422275ms: missing components: kube-dns
	I1123 08:59:42.966917 1240463 system_pods.go:86] 8 kube-system pods found
	I1123 08:59:42.966954 1240463 system_pods.go:89] "coredns-66bc5c9577-zwlsw" [4493cf17-56c7-4aec-aff9-f1b7a47398ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:59:42.966961 1240463 system_pods.go:89] "etcd-no-preload-591175" [d2307eaa-f09d-4d85-8172-b403550f572f] Running
	I1123 08:59:42.966978 1240463 system_pods.go:89] "kindnet-v65j2" [c422d680-2063-435a-8b26-e265e3554728] Running
	I1123 08:59:42.966984 1240463 system_pods.go:89] "kube-apiserver-no-preload-591175" [07643f8f-afbf-48fd-9a2c-b68e6f2a69f9] Running
	I1123 08:59:42.966989 1240463 system_pods.go:89] "kube-controller-manager-no-preload-591175" [153ceee0-38e4-41e6-98bc-915c5d18b057] Running
	I1123 08:59:42.966993 1240463 system_pods.go:89] "kube-proxy-rblwh" [8c4a2941-2f19-43ba-8f9a-7a48072b1223] Running
	I1123 08:59:42.966997 1240463 system_pods.go:89] "kube-scheduler-no-preload-591175" [ce19b8a6-00bd-4cdc-a245-0a8f9551e38d] Running
	I1123 08:59:42.967007 1240463 system_pods.go:89] "storage-provisioner" [923af3fc-5d78-45d7-ad14-fd020a72b76d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:59:42.967024 1240463 retry.go:31] will retry after 324.257357ms: missing components: kube-dns
	I1123 08:59:43.296386 1240463 system_pods.go:86] 8 kube-system pods found
	I1123 08:59:43.296427 1240463 system_pods.go:89] "coredns-66bc5c9577-zwlsw" [4493cf17-56c7-4aec-aff9-f1b7a47398ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:59:43.296434 1240463 system_pods.go:89] "etcd-no-preload-591175" [d2307eaa-f09d-4d85-8172-b403550f572f] Running
	I1123 08:59:43.296440 1240463 system_pods.go:89] "kindnet-v65j2" [c422d680-2063-435a-8b26-e265e3554728] Running
	I1123 08:59:43.296444 1240463 system_pods.go:89] "kube-apiserver-no-preload-591175" [07643f8f-afbf-48fd-9a2c-b68e6f2a69f9] Running
	I1123 08:59:43.296449 1240463 system_pods.go:89] "kube-controller-manager-no-preload-591175" [153ceee0-38e4-41e6-98bc-915c5d18b057] Running
	I1123 08:59:43.296453 1240463 system_pods.go:89] "kube-proxy-rblwh" [8c4a2941-2f19-43ba-8f9a-7a48072b1223] Running
	I1123 08:59:43.296457 1240463 system_pods.go:89] "kube-scheduler-no-preload-591175" [ce19b8a6-00bd-4cdc-a245-0a8f9551e38d] Running
	I1123 08:59:43.296463 1240463 system_pods.go:89] "storage-provisioner" [923af3fc-5d78-45d7-ad14-fd020a72b76d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:59:43.296518 1240463 retry.go:31] will retry after 479.546707ms: missing components: kube-dns
	I1123 08:59:43.781237 1240463 system_pods.go:86] 8 kube-system pods found
	I1123 08:59:43.781317 1240463 system_pods.go:89] "coredns-66bc5c9577-zwlsw" [4493cf17-56c7-4aec-aff9-f1b7a47398ea] Running
	I1123 08:59:43.781347 1240463 system_pods.go:89] "etcd-no-preload-591175" [d2307eaa-f09d-4d85-8172-b403550f572f] Running
	I1123 08:59:43.781365 1240463 system_pods.go:89] "kindnet-v65j2" [c422d680-2063-435a-8b26-e265e3554728] Running
	I1123 08:59:43.781392 1240463 system_pods.go:89] "kube-apiserver-no-preload-591175" [07643f8f-afbf-48fd-9a2c-b68e6f2a69f9] Running
	I1123 08:59:43.781422 1240463 system_pods.go:89] "kube-controller-manager-no-preload-591175" [153ceee0-38e4-41e6-98bc-915c5d18b057] Running
	I1123 08:59:43.781439 1240463 system_pods.go:89] "kube-proxy-rblwh" [8c4a2941-2f19-43ba-8f9a-7a48072b1223] Running
	I1123 08:59:43.781459 1240463 system_pods.go:89] "kube-scheduler-no-preload-591175" [ce19b8a6-00bd-4cdc-a245-0a8f9551e38d] Running
	I1123 08:59:43.781495 1240463 system_pods.go:89] "storage-provisioner" [923af3fc-5d78-45d7-ad14-fd020a72b76d] Running
	I1123 08:59:43.781518 1240463 system_pods.go:126] duration metric: took 1.08753334s to wait for k8s-apps to be running ...
	I1123 08:59:43.781539 1240463 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:59:43.781622 1240463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:59:43.800796 1240463 system_svc.go:56] duration metric: took 19.248927ms WaitForService to wait for kubelet
	I1123 08:59:43.800878 1240463 kubeadm.go:587] duration metric: took 16.563561887s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:59:43.800911 1240463 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:59:43.804795 1240463 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:59:43.804876 1240463 node_conditions.go:123] node cpu capacity is 2
	I1123 08:59:43.804913 1240463 node_conditions.go:105] duration metric: took 3.980393ms to run NodePressure ...
	I1123 08:59:43.804951 1240463 start.go:242] waiting for startup goroutines ...
	I1123 08:59:43.804974 1240463 start.go:247] waiting for cluster config update ...
	I1123 08:59:43.804998 1240463 start.go:256] writing updated cluster config ...
	I1123 08:59:43.805361 1240463 ssh_runner.go:195] Run: rm -f paused
	I1123 08:59:43.813163 1240463 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:59:43.820576 1240463 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zwlsw" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:43.829163 1240463 pod_ready.go:94] pod "coredns-66bc5c9577-zwlsw" is "Ready"
	I1123 08:59:43.829238 1240463 pod_ready.go:86] duration metric: took 8.588254ms for pod "coredns-66bc5c9577-zwlsw" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:43.832094 1240463 pod_ready.go:83] waiting for pod "etcd-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:43.842097 1240463 pod_ready.go:94] pod "etcd-no-preload-591175" is "Ready"
	I1123 08:59:43.842178 1240463 pod_ready.go:86] duration metric: took 10.013142ms for pod "etcd-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:43.859270 1240463 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:43.864795 1240463 pod_ready.go:94] pod "kube-apiserver-no-preload-591175" is "Ready"
	I1123 08:59:43.864870 1240463 pod_ready.go:86] duration metric: took 5.528698ms for pod "kube-apiserver-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:43.867363 1240463 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:44.217778 1240463 pod_ready.go:94] pod "kube-controller-manager-no-preload-591175" is "Ready"
	I1123 08:59:44.217807 1240463 pod_ready.go:86] duration metric: took 350.386365ms for pod "kube-controller-manager-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:44.418190 1240463 pod_ready.go:83] waiting for pod "kube-proxy-rblwh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:44.818151 1240463 pod_ready.go:94] pod "kube-proxy-rblwh" is "Ready"
	I1123 08:59:44.818174 1240463 pod_ready.go:86] duration metric: took 399.961435ms for pod "kube-proxy-rblwh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:45.019120 1240463 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:45.418088 1240463 pod_ready.go:94] pod "kube-scheduler-no-preload-591175" is "Ready"
	I1123 08:59:45.418118 1240463 pod_ready.go:86] duration metric: took 398.966468ms for pod "kube-scheduler-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:59:45.418131 1240463 pod_ready.go:40] duration metric: took 1.604889038s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:59:45.519360 1240463 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 08:59:45.522647 1240463 out.go:179] * Done! kubectl is now configured to use "no-preload-591175" cluster and "default" namespace by default
	I1123 08:59:49.493897 1244564 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:59:49.493970 1244564 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:59:49.494094 1244564 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:59:49.494186 1244564 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:59:49.494237 1244564 kubeadm.go:319] OS: Linux
	I1123 08:59:49.494300 1244564 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:59:49.494355 1244564 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:59:49.494408 1244564 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:59:49.494460 1244564 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:59:49.494519 1244564 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:59:49.494579 1244564 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:59:49.494632 1244564 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:59:49.494705 1244564 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:59:49.494772 1244564 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:59:49.494860 1244564 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:59:49.494985 1244564 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:59:49.495106 1244564 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:59:49.495207 1244564 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:59:49.500179 1244564 out.go:252]   - Generating certificates and keys ...
	I1123 08:59:49.500272 1244564 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:59:49.500345 1244564 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:59:49.500415 1244564 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:59:49.500475 1244564 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:59:49.500539 1244564 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:59:49.500597 1244564 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:59:49.500654 1244564 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:59:49.500777 1244564 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-261704] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:59:49.500833 1244564 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:59:49.500954 1244564 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-261704] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:59:49.501022 1244564 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:59:49.501088 1244564 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:59:49.501134 1244564 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:59:49.501193 1244564 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:59:49.501247 1244564 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:59:49.501307 1244564 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:59:49.501374 1244564 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:59:49.501443 1244564 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:59:49.501502 1244564 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:59:49.501587 1244564 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:59:49.501653 1244564 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:59:49.504568 1244564 out.go:252]   - Booting up control plane ...
	I1123 08:59:49.504672 1244564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:59:49.504753 1244564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:59:49.504820 1244564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:59:49.504921 1244564 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:59:49.505013 1244564 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:59:49.505131 1244564 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:59:49.505221 1244564 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:59:49.505265 1244564 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:59:49.505393 1244564 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:59:49.505495 1244564 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:59:49.505552 1244564 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501089831s
	I1123 08:59:49.505643 1244564 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:59:49.505722 1244564 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 08:59:49.505810 1244564 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:59:49.505887 1244564 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:59:49.505962 1244564 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.277533567s
	I1123 08:59:49.506029 1244564 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.933753111s
	I1123 08:59:49.506096 1244564 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502512395s
	I1123 08:59:49.506199 1244564 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:59:49.506320 1244564 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:59:49.506389 1244564 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:59:49.506566 1244564 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-261704 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:59:49.506625 1244564 kubeadm.go:319] [bootstrap-token] Using token: oyh4ba.hikr7qjmyumlt8y0
	I1123 08:59:49.511549 1244564 out.go:252]   - Configuring RBAC rules ...
	I1123 08:59:49.511734 1244564 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:59:49.511829 1244564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:59:49.511989 1244564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:59:49.512155 1244564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:59:49.512286 1244564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:59:49.512372 1244564 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:59:49.512485 1244564 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:59:49.512534 1244564 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:59:49.512582 1244564 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:59:49.512589 1244564 kubeadm.go:319] 
	I1123 08:59:49.512646 1244564 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:59:49.512654 1244564 kubeadm.go:319] 
	I1123 08:59:49.512726 1244564 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:59:49.512733 1244564 kubeadm.go:319] 
	I1123 08:59:49.512757 1244564 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:59:49.512816 1244564 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:59:49.512867 1244564 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:59:49.512874 1244564 kubeadm.go:319] 
	I1123 08:59:49.512925 1244564 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:59:49.512933 1244564 kubeadm.go:319] 
	I1123 08:59:49.512978 1244564 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:59:49.512984 1244564 kubeadm.go:319] 
	I1123 08:59:49.513034 1244564 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:59:49.513107 1244564 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:59:49.513174 1244564 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:59:49.513181 1244564 kubeadm.go:319] 
	I1123 08:59:49.513260 1244564 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:59:49.513336 1244564 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:59:49.513343 1244564 kubeadm.go:319] 
	I1123 08:59:49.513423 1244564 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token oyh4ba.hikr7qjmyumlt8y0 \
	I1123 08:59:49.513524 1244564 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb \
	I1123 08:59:49.513546 1244564 kubeadm.go:319] 	--control-plane 
	I1123 08:59:49.513553 1244564 kubeadm.go:319] 
	I1123 08:59:49.513632 1244564 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:59:49.513640 1244564 kubeadm.go:319] 
	I1123 08:59:49.513717 1244564 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token oyh4ba.hikr7qjmyumlt8y0 \
	I1123 08:59:49.513829 1244564 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb 
	I1123 08:59:49.513841 1244564 cni.go:84] Creating CNI manager for ""
	I1123 08:59:49.513848 1244564 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:59:49.517044 1244564 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:59:49.519970 1244564 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:59:49.523843 1244564 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:59:49.523863 1244564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:59:49.536157 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:59:49.827486 1244564 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:59:49.827634 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:49.827716 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-261704 minikube.k8s.io/updated_at=2025_11_23T08_59_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=newest-cni-261704 minikube.k8s.io/primary=true
	I1123 08:59:49.993458 1244564 ops.go:34] apiserver oom_adj: -16
	I1123 08:59:49.993571 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:50.494170 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:50.994524 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:51.493860 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:51.994019 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:52.494107 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:52.994376 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:53.494269 1244564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:59:53.585934 1244564 kubeadm.go:1114] duration metric: took 3.758351017s to wait for elevateKubeSystemPrivileges
	I1123 08:59:53.585975 1244564 kubeadm.go:403] duration metric: took 20.851060264s to StartCluster
	I1123 08:59:53.585993 1244564 settings.go:142] acquiring lock: {Name:mk23f3092f33e47ced9558cb4bac2b30c55547fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:53.586055 1244564 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:59:53.587001 1244564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:53.587239 1244564 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:59:53.587322 1244564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:59:53.587549 1244564 config.go:182] Loaded profile config "newest-cni-261704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:59:53.587579 1244564 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:59:53.587634 1244564 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-261704"
	I1123 08:59:53.587650 1244564 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-261704"
	I1123 08:59:53.587669 1244564 host.go:66] Checking if "newest-cni-261704" exists ...
	I1123 08:59:53.587689 1244564 addons.go:70] Setting default-storageclass=true in profile "newest-cni-261704"
	I1123 08:59:53.587707 1244564 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-261704"
	I1123 08:59:53.588022 1244564 cli_runner.go:164] Run: docker container inspect newest-cni-261704 --format={{.State.Status}}
	I1123 08:59:53.588349 1244564 cli_runner.go:164] Run: docker container inspect newest-cni-261704 --format={{.State.Status}}
	I1123 08:59:53.590612 1244564 out.go:179] * Verifying Kubernetes components...
	I1123 08:59:53.599307 1244564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:53.627428 1244564 addons.go:239] Setting addon default-storageclass=true in "newest-cni-261704"
	I1123 08:59:53.627463 1244564 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:59:53.627465 1244564 host.go:66] Checking if "newest-cni-261704" exists ...
	I1123 08:59:53.627928 1244564 cli_runner.go:164] Run: docker container inspect newest-cni-261704 --format={{.State.Status}}
	I1123 08:59:53.631950 1244564 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:59:53.631975 1244564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:59:53.632038 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:53.651369 1244564 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:59:53.651392 1244564 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:59:53.651461 1244564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 08:59:53.680486 1244564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/newest-cni-261704/id_rsa Username:docker}
	I1123 08:59:53.695602 1244564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34547 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/newest-cni-261704/id_rsa Username:docker}
	I1123 08:59:53.974048 1244564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:59:53.974218 1244564 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:59:53.977204 1244564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:59:54.028136 1244564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:59:54.664510 1244564 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 08:59:54.665394 1244564 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:59:54.665566 1244564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:59:54.943839 1244564 api_server.go:72] duration metric: took 1.356569742s to wait for apiserver process to appear ...
	I1123 08:59:54.943866 1244564 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:59:54.943884 1244564 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:59:54.946709 1244564 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1123 08:59:54.950456 1244564 addons.go:530] duration metric: took 1.362867394s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 08:59:54.960176 1244564 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 08:59:54.961114 1244564 api_server.go:141] control plane version: v1.34.1
	I1123 08:59:54.961134 1244564 api_server.go:131] duration metric: took 17.26193ms to wait for apiserver health ...
	I1123 08:59:54.961143 1244564 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:59:54.964349 1244564 system_pods.go:59] 8 kube-system pods found
	I1123 08:59:54.964387 1244564 system_pods.go:61] "coredns-66bc5c9577-mdvx8" [aae4ba97-00dc-4620-818d-e571ed2a5b99] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 08:59:54.964398 1244564 system_pods.go:61] "etcd-newest-cni-261704" [ceed2430-2405-415c-9d8a-cbb9fec62bb3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:59:54.964407 1244564 system_pods.go:61] "kindnet-k7fsm" [7c5f3452-ed50-4a8d-82e3-51abceb3b21b] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:59:54.964415 1244564 system_pods.go:61] "kube-apiserver-newest-cni-261704" [b69d74bd-25b5-478e-a10e-e2c0b67c51d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:59:54.964428 1244564 system_pods.go:61] "kube-controller-manager-newest-cni-261704" [6b736ad3-cf70-428d-aabf-8635b1b3fabd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:59:54.964435 1244564 system_pods.go:61] "kube-proxy-wp8vw" [36630050-6d8d-433a-a3bc-77fc44b8484e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:59:54.964441 1244564 system_pods.go:61] "kube-scheduler-newest-cni-261704" [c824e0c6-1c1a-48a1-b05a-114c05052710] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:59:54.964450 1244564 system_pods.go:61] "storage-provisioner" [2afa132f-b478-4d70-9125-e632f2084e4e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 08:59:54.964456 1244564 system_pods.go:74] duration metric: took 3.30755ms to wait for pod list to return data ...
	I1123 08:59:54.964470 1244564 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:59:54.966620 1244564 default_sa.go:45] found service account: "default"
	I1123 08:59:54.966638 1244564 default_sa.go:55] duration metric: took 2.161908ms for default service account to be created ...
	I1123 08:59:54.966649 1244564 kubeadm.go:587] duration metric: took 1.379384754s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 08:59:54.966667 1244564 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:59:54.969127 1244564 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:59:54.969155 1244564 node_conditions.go:123] node cpu capacity is 2
	I1123 08:59:54.969168 1244564 node_conditions.go:105] duration metric: took 2.496627ms to run NodePressure ...
	I1123 08:59:54.969181 1244564 start.go:242] waiting for startup goroutines ...
	I1123 08:59:55.169789 1244564 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-261704" context rescaled to 1 replicas
	I1123 08:59:55.169830 1244564 start.go:247] waiting for cluster config update ...
	I1123 08:59:55.169867 1244564 start.go:256] writing updated cluster config ...
	I1123 08:59:55.170191 1244564 ssh_runner.go:195] Run: rm -f paused
	I1123 08:59:55.227798 1244564 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 08:59:55.233185 1244564 out.go:179] * Done! kubectl is now configured to use "newest-cni-261704" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 08:59:43 no-preload-591175 crio[837]: time="2025-11-23T08:59:43.118902071Z" level=info msg="Created container 7f56ecf10ed2f8afaff1088a6a49d7f654cddd128ab9b9949597a67c92387055: kube-system/coredns-66bc5c9577-zwlsw/coredns" id=d3d48ec7-bc28-4ff5-9098-ba900ea4df16 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:59:43 no-preload-591175 crio[837]: time="2025-11-23T08:59:43.121961045Z" level=info msg="Starting container: 7f56ecf10ed2f8afaff1088a6a49d7f654cddd128ab9b9949597a67c92387055" id=608450f2-98c8-446d-a6e4-171444bbf38d name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:59:43 no-preload-591175 crio[837]: time="2025-11-23T08:59:43.132483046Z" level=info msg="Started container" PID=2493 containerID=7f56ecf10ed2f8afaff1088a6a49d7f654cddd128ab9b9949597a67c92387055 description=kube-system/coredns-66bc5c9577-zwlsw/coredns id=608450f2-98c8-446d-a6e4-171444bbf38d name=/runtime.v1.RuntimeService/StartContainer sandboxID=8316da45870ca9acabcae87f4f6437715d395abbfe7accf523bf09470845180d
	Nov 23 08:59:46 no-preload-591175 crio[837]: time="2025-11-23T08:59:46.163150779Z" level=info msg="Running pod sandbox: default/busybox/POD" id=027c6b43-024c-4061-9d4a-02694ccb75cc name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:59:46 no-preload-591175 crio[837]: time="2025-11-23T08:59:46.163260766Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:59:46 no-preload-591175 crio[837]: time="2025-11-23T08:59:46.169930732Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b2d3ae330c7bf32800697b29f8a1f2cbb5a240e360e954de85d7d4054aa8d969 UID:955d780f-d21f-4c17-a520-a1df10d9609a NetNS:/var/run/netns/290299cb-74e2-474d-98f8-91d543bfc618 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d3c0}] Aliases:map[]}"
	Nov 23 08:59:46 no-preload-591175 crio[837]: time="2025-11-23T08:59:46.170091901Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 08:59:46 no-preload-591175 crio[837]: time="2025-11-23T08:59:46.190238426Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b2d3ae330c7bf32800697b29f8a1f2cbb5a240e360e954de85d7d4054aa8d969 UID:955d780f-d21f-4c17-a520-a1df10d9609a NetNS:/var/run/netns/290299cb-74e2-474d-98f8-91d543bfc618 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d3c0}] Aliases:map[]}"
	Nov 23 08:59:46 no-preload-591175 crio[837]: time="2025-11-23T08:59:46.19039633Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 08:59:46 no-preload-591175 crio[837]: time="2025-11-23T08:59:46.201789617Z" level=info msg="Ran pod sandbox b2d3ae330c7bf32800697b29f8a1f2cbb5a240e360e954de85d7d4054aa8d969 with infra container: default/busybox/POD" id=027c6b43-024c-4061-9d4a-02694ccb75cc name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:59:46 no-preload-591175 crio[837]: time="2025-11-23T08:59:46.202731473Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3ed2d714-a6f9-4b8c-9250-9b4e06b013e8 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:59:46 no-preload-591175 crio[837]: time="2025-11-23T08:59:46.202849443Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3ed2d714-a6f9-4b8c-9250-9b4e06b013e8 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:59:46 no-preload-591175 crio[837]: time="2025-11-23T08:59:46.202891502Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=3ed2d714-a6f9-4b8c-9250-9b4e06b013e8 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:59:46 no-preload-591175 crio[837]: time="2025-11-23T08:59:46.204320171Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=89b5a864-eb95-440e-96a2-ad4d4a9eb50f name=/runtime.v1.ImageService/PullImage
	Nov 23 08:59:46 no-preload-591175 crio[837]: time="2025-11-23T08:59:46.205557782Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:59:48 no-preload-591175 crio[837]: time="2025-11-23T08:59:48.340431201Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=89b5a864-eb95-440e-96a2-ad4d4a9eb50f name=/runtime.v1.ImageService/PullImage
	Nov 23 08:59:48 no-preload-591175 crio[837]: time="2025-11-23T08:59:48.341506214Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=64bc6273-50a4-453e-9b25-8fb1798a9ee6 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:59:48 no-preload-591175 crio[837]: time="2025-11-23T08:59:48.343077165Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a4539914-3fe7-4db6-80b8-caebfb35d529 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:59:48 no-preload-591175 crio[837]: time="2025-11-23T08:59:48.348603836Z" level=info msg="Creating container: default/busybox/busybox" id=6432d1c9-6177-419b-a7dd-6de1df32ca2b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:59:48 no-preload-591175 crio[837]: time="2025-11-23T08:59:48.348719016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:59:48 no-preload-591175 crio[837]: time="2025-11-23T08:59:48.353480613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:59:48 no-preload-591175 crio[837]: time="2025-11-23T08:59:48.35406166Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:59:48 no-preload-591175 crio[837]: time="2025-11-23T08:59:48.370611864Z" level=info msg="Created container 99458d8916b0702c572918513c8ca7092e0bc79058d8ad19de32f99559976164: default/busybox/busybox" id=6432d1c9-6177-419b-a7dd-6de1df32ca2b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:59:48 no-preload-591175 crio[837]: time="2025-11-23T08:59:48.374557107Z" level=info msg="Starting container: 99458d8916b0702c572918513c8ca7092e0bc79058d8ad19de32f99559976164" id=d3f6a86b-50b7-47c6-8eac-7bd4b7213c46 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:59:48 no-preload-591175 crio[837]: time="2025-11-23T08:59:48.3782862Z" level=info msg="Started container" PID=2553 containerID=99458d8916b0702c572918513c8ca7092e0bc79058d8ad19de32f99559976164 description=default/busybox/busybox id=d3f6a86b-50b7-47c6-8eac-7bd4b7213c46 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b2d3ae330c7bf32800697b29f8a1f2cbb5a240e360e954de85d7d4054aa8d969
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	99458d8916b07       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago       Running             busybox                   0                   b2d3ae330c7bf       busybox                                     default
	7f56ecf10ed2f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago      Running             coredns                   0                   8316da45870ca       coredns-66bc5c9577-zwlsw                    kube-system
	f9cf7c1f82550       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      14 seconds ago      Running             storage-provisioner       0                   ea7dcd286f1af       storage-provisioner                         kube-system
	c41c189ae055e       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   f37e9b5f74c83       kindnet-v65j2                               kube-system
	212bbf41fdb3d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      29 seconds ago      Running             kube-proxy                0                   076a2d6ba7019       kube-proxy-rblwh                            kube-system
	47c38c9ef89ba       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      43 seconds ago      Running             kube-controller-manager   0                   371b63c7f8189       kube-controller-manager-no-preload-591175   kube-system
	4f9292e2a088f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      43 seconds ago      Running             etcd                      0                   0d253356ddbfd       etcd-no-preload-591175                      kube-system
	57bfcbeb6cdcd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      43 seconds ago      Running             kube-apiserver            0                   1b8c2d02a72be       kube-apiserver-no-preload-591175            kube-system
	8358dfb03d1ca       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      43 seconds ago      Running             kube-scheduler            0                   57251c63d2aff       kube-scheduler-no-preload-591175            kube-system
	
	
	==> coredns [7f56ecf10ed2f8afaff1088a6a49d7f654cddd128ab9b9949597a67c92387055] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41028 - 54629 "HINFO IN 7664180709661486679.8758078928546140347. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036545449s
	
	
	==> describe nodes <==
	Name:               no-preload-591175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-591175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=no-preload-591175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_59_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:59:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-591175
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:59:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:59:53 +0000   Sun, 23 Nov 2025 08:59:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:59:53 +0000   Sun, 23 Nov 2025 08:59:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:59:53 +0000   Sun, 23 Nov 2025 08:59:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:59:53 +0000   Sun, 23 Nov 2025 08:59:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-591175
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                f436885f-b4ec-44fe-a494-6bb1784496fe
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-zwlsw                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-591175                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-v65j2                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-591175             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-591175    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-rblwh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-591175             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 29s                kube-proxy       
	  Normal   Starting                 44s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 44s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node no-preload-591175 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node no-preload-591175 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     44s (x6 over 44s)  kubelet          Node no-preload-591175 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-591175 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-591175 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-591175 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node no-preload-591175 event: Registered Node no-preload-591175 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-591175 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 08:37] overlayfs: idmapped layers are currently not supported
	[Nov23 08:38] overlayfs: idmapped layers are currently not supported
	[  +8.276067] overlayfs: idmapped layers are currently not supported
	[Nov23 08:39] overlayfs: idmapped layers are currently not supported
	[ +25.090966] overlayfs: idmapped layers are currently not supported
	[Nov23 08:40] overlayfs: idmapped layers are currently not supported
	[ +26.896711] overlayfs: idmapped layers are currently not supported
	[Nov23 08:41] overlayfs: idmapped layers are currently not supported
	[Nov23 08:43] overlayfs: idmapped layers are currently not supported
	[Nov23 08:45] overlayfs: idmapped layers are currently not supported
	[Nov23 08:46] overlayfs: idmapped layers are currently not supported
	[Nov23 08:47] overlayfs: idmapped layers are currently not supported
	[Nov23 08:49] overlayfs: idmapped layers are currently not supported
	[Nov23 08:51] overlayfs: idmapped layers are currently not supported
	[ +55.116920] overlayfs: idmapped layers are currently not supported
	[Nov23 08:52] overlayfs: idmapped layers are currently not supported
	[  +5.731396] overlayfs: idmapped layers are currently not supported
	[Nov23 08:53] overlayfs: idmapped layers are currently not supported
	[Nov23 08:54] overlayfs: idmapped layers are currently not supported
	[Nov23 08:55] overlayfs: idmapped layers are currently not supported
	[Nov23 08:56] overlayfs: idmapped layers are currently not supported
	[Nov23 08:57] overlayfs: idmapped layers are currently not supported
	[Nov23 08:58] overlayfs: idmapped layers are currently not supported
	[ +37.440319] overlayfs: idmapped layers are currently not supported
	[Nov23 08:59] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4f9292e2a088f733105cea6e590bd0615e9f207141aff4de5f278e110219d909] <==
	{"level":"warn","ts":"2025-11-23T08:59:17.493664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:17.523749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:17.543751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:17.577824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:17.602583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:17.664635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:17.689869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:17.724808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:17.755661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:17.791371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:17.839260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:17.872874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:17.943532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:17.952856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:17.968240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:17.997601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:18.017050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:18.047663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:18.062417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:18.078378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:18.105080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:18.124623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:18.148788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:18.166479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:59:18.290940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50594","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:59:58 up  9:42,  0 user,  load average: 3.76, 3.36, 2.81
	Linux no-preload-591175 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c41c189ae055e52703292cf45f7fb71e76f16c11062d493bec0b9e82a99ec339] <==
	I1123 08:59:32.328806       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:59:32.329194       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:59:32.329361       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:59:32.329401       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:59:32.329437       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:59:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:59:32.529127       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:59:32.529155       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:59:32.529163       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:59:32.529843       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:59:32.729792       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:59:32.729880       1 metrics.go:72] Registering metrics
	I1123 08:59:32.729960       1 controller.go:711] "Syncing nftables rules"
	I1123 08:59:42.535229       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:59:42.535331       1 main.go:301] handling current node
	I1123 08:59:52.529741       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:59:52.529790       1 main.go:301] handling current node
	
	
	==> kube-apiserver [57bfcbeb6cdcd4aa2f379f20048424fb0b7e9278ee9d150c15074b8610efb06b] <==
	I1123 08:59:19.391750       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 08:59:19.421497       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:59:19.422740       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:59:19.426743       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 08:59:19.439174       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:59:19.459007       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:59:19.461341       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:59:20.088651       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:59:20.099332       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:59:20.099458       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:59:21.007715       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:59:21.104153       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:59:21.204396       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:59:21.217955       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 08:59:21.219225       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:59:21.225298       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:59:21.259134       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:59:22.215739       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:59:22.300802       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:59:22.344349       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:59:27.111151       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:59:27.117597       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:59:27.290827       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:59:27.372482       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 08:59:55.993090       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:50712: use of closed network connection
	
	
	==> kube-controller-manager [47c38c9ef89ba7acb309a099bf3720724cbb0d68ebe3cdb9be97b3f9e374addd] <==
	I1123 08:59:26.302835       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:59:26.304266       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 08:59:26.305583       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 08:59:26.305654       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 08:59:26.305716       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:59:26.307144       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 08:59:26.308345       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:59:26.308425       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:59:26.309722       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:59:26.310338       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:59:26.311525       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:59:26.316267       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 08:59:26.316359       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 08:59:26.316386       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:59:26.317602       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:59:26.317656       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:59:26.319877       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 08:59:26.319971       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 08:59:26.319996       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 08:59:26.320002       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 08:59:26.320007       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:59:26.322096       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:59:26.325340       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 08:59:26.328841       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-591175" podCIDRs=["10.244.0.0/24"]
	I1123 08:59:46.262886       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [212bbf41fdb3d1b5aebf71719c58980dfd0593490e5a75b927171b7c366e311b] <==
	I1123 08:59:28.159807       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:59:28.256832       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:59:28.356940       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:59:28.357014       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 08:59:28.357110       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:59:28.401707       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:59:28.401776       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:59:28.409277       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:59:28.410161       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:59:28.410176       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:59:28.424962       1 config.go:200] "Starting service config controller"
	I1123 08:59:28.424987       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:59:28.425001       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:59:28.425005       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:59:28.425024       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:59:28.425029       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:59:28.425653       1 config.go:309] "Starting node config controller"
	I1123 08:59:28.425666       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:59:28.425672       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:59:28.525338       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:59:28.525382       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:59:28.525396       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8358dfb03d1ca26c2dfee718c5724c356bcf227d211795a94cb054ca1dca8083] <==
	E1123 08:59:19.373831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:59:19.373997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:59:19.374100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:59:19.374216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 08:59:19.390076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:59:19.412265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:59:19.412454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:59:19.412583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:59:19.412737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:59:19.412853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:59:19.412967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:59:19.413473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:59:19.415584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:59:19.415666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:59:20.283447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:59:20.299046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:59:20.303060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:59:20.397641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:59:20.461944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:59:20.515195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:59:20.594338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:59:20.594412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:59:20.605965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:59:20.930461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1123 08:59:23.664059       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:59:26 no-preload-591175 kubelet[2019]: I1123 08:59:26.386371    2019 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:59:26 no-preload-591175 kubelet[2019]: I1123 08:59:26.386983    2019 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:59:27 no-preload-591175 kubelet[2019]: I1123 08:59:27.672198    2019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8c4a2941-2f19-43ba-8f9a-7a48072b1223-kube-proxy\") pod \"kube-proxy-rblwh\" (UID: \"8c4a2941-2f19-43ba-8f9a-7a48072b1223\") " pod="kube-system/kube-proxy-rblwh"
	Nov 23 08:59:27 no-preload-591175 kubelet[2019]: I1123 08:59:27.672248    2019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c422d680-2063-435a-8b26-e265e3554728-cni-cfg\") pod \"kindnet-v65j2\" (UID: \"c422d680-2063-435a-8b26-e265e3554728\") " pod="kube-system/kindnet-v65j2"
	Nov 23 08:59:27 no-preload-591175 kubelet[2019]: I1123 08:59:27.672271    2019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c4a2941-2f19-43ba-8f9a-7a48072b1223-xtables-lock\") pod \"kube-proxy-rblwh\" (UID: \"8c4a2941-2f19-43ba-8f9a-7a48072b1223\") " pod="kube-system/kube-proxy-rblwh"
	Nov 23 08:59:27 no-preload-591175 kubelet[2019]: I1123 08:59:27.672292    2019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c4a2941-2f19-43ba-8f9a-7a48072b1223-lib-modules\") pod \"kube-proxy-rblwh\" (UID: \"8c4a2941-2f19-43ba-8f9a-7a48072b1223\") " pod="kube-system/kube-proxy-rblwh"
	Nov 23 08:59:27 no-preload-591175 kubelet[2019]: I1123 08:59:27.672309    2019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c422d680-2063-435a-8b26-e265e3554728-xtables-lock\") pod \"kindnet-v65j2\" (UID: \"c422d680-2063-435a-8b26-e265e3554728\") " pod="kube-system/kindnet-v65j2"
	Nov 23 08:59:27 no-preload-591175 kubelet[2019]: I1123 08:59:27.678228    2019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-948bn\" (UniqueName: \"kubernetes.io/projected/c422d680-2063-435a-8b26-e265e3554728-kube-api-access-948bn\") pod \"kindnet-v65j2\" (UID: \"c422d680-2063-435a-8b26-e265e3554728\") " pod="kube-system/kindnet-v65j2"
	Nov 23 08:59:27 no-preload-591175 kubelet[2019]: I1123 08:59:27.678280    2019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwjww\" (UniqueName: \"kubernetes.io/projected/8c4a2941-2f19-43ba-8f9a-7a48072b1223-kube-api-access-xwjww\") pod \"kube-proxy-rblwh\" (UID: \"8c4a2941-2f19-43ba-8f9a-7a48072b1223\") " pod="kube-system/kube-proxy-rblwh"
	Nov 23 08:59:27 no-preload-591175 kubelet[2019]: I1123 08:59:27.678302    2019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c422d680-2063-435a-8b26-e265e3554728-lib-modules\") pod \"kindnet-v65j2\" (UID: \"c422d680-2063-435a-8b26-e265e3554728\") " pod="kube-system/kindnet-v65j2"
	Nov 23 08:59:27 no-preload-591175 kubelet[2019]: I1123 08:59:27.907426    2019 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 08:59:28 no-preload-591175 kubelet[2019]: W1123 08:59:28.313568    2019 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369/crio-f37e9b5f74c83d9974b8a23721bdba975c27c8b4c4dbf8b7c841e7b1ef2573ae WatchSource:0}: Error finding container f37e9b5f74c83d9974b8a23721bdba975c27c8b4c4dbf8b7c841e7b1ef2573ae: Status 404 returned error can't find the container with id f37e9b5f74c83d9974b8a23721bdba975c27c8b4c4dbf8b7c841e7b1ef2573ae
	Nov 23 08:59:30 no-preload-591175 kubelet[2019]: I1123 08:59:30.647474    2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rblwh" podStartSLOduration=3.647457827 podStartE2EDuration="3.647457827s" podCreationTimestamp="2025-11-23 08:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:59:28.353969145 +0000 UTC m=+6.284996690" watchObservedRunningTime="2025-11-23 08:59:30.647457827 +0000 UTC m=+8.578485396"
	Nov 23 08:59:32 no-preload-591175 kubelet[2019]: I1123 08:59:32.531721    2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-v65j2" podStartSLOduration=1.718006422 podStartE2EDuration="5.531694215s" podCreationTimestamp="2025-11-23 08:59:27 +0000 UTC" firstStartedPulling="2025-11-23 08:59:28.316499717 +0000 UTC m=+6.247527262" lastFinishedPulling="2025-11-23 08:59:32.130187502 +0000 UTC m=+10.061215055" observedRunningTime="2025-11-23 08:59:32.379573943 +0000 UTC m=+10.310601488" watchObservedRunningTime="2025-11-23 08:59:32.531694215 +0000 UTC m=+10.462721760"
	Nov 23 08:59:42 no-preload-591175 kubelet[2019]: I1123 08:59:42.596118    2019 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:59:42 no-preload-591175 kubelet[2019]: I1123 08:59:42.723132    2019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/923af3fc-5d78-45d7-ad14-fd020a72b76d-tmp\") pod \"storage-provisioner\" (UID: \"923af3fc-5d78-45d7-ad14-fd020a72b76d\") " pod="kube-system/storage-provisioner"
	Nov 23 08:59:42 no-preload-591175 kubelet[2019]: I1123 08:59:42.723396    2019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjn2s\" (UniqueName: \"kubernetes.io/projected/923af3fc-5d78-45d7-ad14-fd020a72b76d-kube-api-access-mjn2s\") pod \"storage-provisioner\" (UID: \"923af3fc-5d78-45d7-ad14-fd020a72b76d\") " pod="kube-system/storage-provisioner"
	Nov 23 08:59:42 no-preload-591175 kubelet[2019]: I1123 08:59:42.824419    2019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4kjr\" (UniqueName: \"kubernetes.io/projected/4493cf17-56c7-4aec-aff9-f1b7a47398ea-kube-api-access-h4kjr\") pod \"coredns-66bc5c9577-zwlsw\" (UID: \"4493cf17-56c7-4aec-aff9-f1b7a47398ea\") " pod="kube-system/coredns-66bc5c9577-zwlsw"
	Nov 23 08:59:42 no-preload-591175 kubelet[2019]: I1123 08:59:42.824670    2019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4493cf17-56c7-4aec-aff9-f1b7a47398ea-config-volume\") pod \"coredns-66bc5c9577-zwlsw\" (UID: \"4493cf17-56c7-4aec-aff9-f1b7a47398ea\") " pod="kube-system/coredns-66bc5c9577-zwlsw"
	Nov 23 08:59:42 no-preload-591175 kubelet[2019]: W1123 08:59:42.997787    2019 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369/crio-ea7dcd286f1af870e870a515b743b49ef126d9b78c8e005b1de0e310d2f05c4c WatchSource:0}: Error finding container ea7dcd286f1af870e870a515b743b49ef126d9b78c8e005b1de0e310d2f05c4c: Status 404 returned error can't find the container with id ea7dcd286f1af870e870a515b743b49ef126d9b78c8e005b1de0e310d2f05c4c
	Nov 23 08:59:43 no-preload-591175 kubelet[2019]: W1123 08:59:43.041522    2019 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369/crio-8316da45870ca9acabcae87f4f6437715d395abbfe7accf523bf09470845180d WatchSource:0}: Error finding container 8316da45870ca9acabcae87f4f6437715d395abbfe7accf523bf09470845180d: Status 404 returned error can't find the container with id 8316da45870ca9acabcae87f4f6437715d395abbfe7accf523bf09470845180d
	Nov 23 08:59:43 no-preload-591175 kubelet[2019]: I1123 08:59:43.411140    2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zwlsw" podStartSLOduration=16.411123083 podStartE2EDuration="16.411123083s" podCreationTimestamp="2025-11-23 08:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:59:43.38610906 +0000 UTC m=+21.317136604" watchObservedRunningTime="2025-11-23 08:59:43.411123083 +0000 UTC m=+21.342150628"
	Nov 23 08:59:43 no-preload-591175 kubelet[2019]: I1123 08:59:43.438188    2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.438169091 podStartE2EDuration="14.438169091s" podCreationTimestamp="2025-11-23 08:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:59:43.413318886 +0000 UTC m=+21.344346447" watchObservedRunningTime="2025-11-23 08:59:43.438169091 +0000 UTC m=+21.369196636"
	Nov 23 08:59:45 no-preload-591175 kubelet[2019]: I1123 08:59:45.947299    2019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98dzp\" (UniqueName: \"kubernetes.io/projected/955d780f-d21f-4c17-a520-a1df10d9609a-kube-api-access-98dzp\") pod \"busybox\" (UID: \"955d780f-d21f-4c17-a520-a1df10d9609a\") " pod="default/busybox"
	Nov 23 08:59:46 no-preload-591175 kubelet[2019]: W1123 08:59:46.200635    2019 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369/crio-b2d3ae330c7bf32800697b29f8a1f2cbb5a240e360e954de85d7d4054aa8d969 WatchSource:0}: Error finding container b2d3ae330c7bf32800697b29f8a1f2cbb5a240e360e954de85d7d4054aa8d969: Status 404 returned error can't find the container with id b2d3ae330c7bf32800697b29f8a1f2cbb5a240e360e954de85d7d4054aa8d969
	
	
	==> storage-provisioner [f9cf7c1f8255071e0c19c019f7238774425b3cd679bc79c5604dbe909d5f48e0] <==
	I1123 08:59:43.150749       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:59:43.189285       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:59:43.189408       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:59:43.200613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:43.241566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:59:43.245342       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:59:43.245643       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-591175_d2e44cf5-b7d1-4597-93e5-3f06608c3ad7!
	I1123 08:59:43.255932       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"600e7609-78b8-477b-9429-5d86b624370f", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-591175_d2e44cf5-b7d1-4597-93e5-3f06608c3ad7 became leader
	W1123 08:59:43.261588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:43.272512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:59:43.346590       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-591175_d2e44cf5-b7d1-4597-93e5-3f06608c3ad7!
	W1123 08:59:45.277093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:45.292401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:47.296225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:47.301101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:49.304304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:49.308716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:51.311881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:51.316124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:53.320513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:53.338110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:55.341152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:55.348530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:57.352570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:59:57.362020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-591175 -n no-preload-591175
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-591175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.24s)
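Note: the repeated "v1 Endpoints is deprecated in v1.33+" warnings in the storage-provisioner log above come from its leader election, which still records the kube-system/k8s.io-minikube-hostpath lease on a v1 Endpoints object (the LeaderElection event above is attached to that same object). A quick, hedged way to inspect that object for this profile, assuming kubectl and the test's kubeconfig context are available:

	# Inspect the Endpoints object the provisioner uses for leader election.
	kubectl --context no-preload-591175 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml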

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (8.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-261704 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-261704 --alsologtostderr -v=1: exit status 80 (2.637572346s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-261704 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:00:16.054741 1251278 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:00:16.054862 1251278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:16.054872 1251278 out.go:374] Setting ErrFile to fd 2...
	I1123 09:00:16.054878 1251278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:16.055269 1251278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 09:00:16.055572 1251278 out.go:368] Setting JSON to false
	I1123 09:00:16.055599 1251278 mustload.go:66] Loading cluster: newest-cni-261704
	I1123 09:00:16.056290 1251278 config.go:182] Loaded profile config "newest-cni-261704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:16.056946 1251278 cli_runner.go:164] Run: docker container inspect newest-cni-261704 --format={{.State.Status}}
	I1123 09:00:16.081117 1251278 host.go:66] Checking if "newest-cni-261704" exists ...
	I1123 09:00:16.081444 1251278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:00:16.199584 1251278 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-23 09:00:16.188250399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:00:16.200243 1251278 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-261704 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 09:00:16.203801 1251278 out.go:179] * Pausing node newest-cni-261704 ... 
	I1123 09:00:16.206323 1251278 host.go:66] Checking if "newest-cni-261704" exists ...
	I1123 09:00:16.206663 1251278 ssh_runner.go:195] Run: systemctl --version
	I1123 09:00:16.206704 1251278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-261704
	I1123 09:00:16.233865 1251278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34552 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/newest-cni-261704/id_rsa Username:docker}
	I1123 09:00:16.338619 1251278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:00:16.361244 1251278 pause.go:52] kubelet running: true
	I1123 09:00:16.361328 1251278 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:00:16.670521 1251278 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:00:16.670612 1251278 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:00:16.802053 1251278 cri.go:89] found id: "5bc07aa622b4f2cb6c6e329917932f22bf4e7146084f2b0def422d1c54e83a0c"
	I1123 09:00:16.802075 1251278 cri.go:89] found id: "9624a44b6ce5cc2c014b290df02810ce1759d2c0a54189542ed9456c4eb24a2a"
	I1123 09:00:16.802080 1251278 cri.go:89] found id: "3af4aa3938c00ead58f3f54203d3fd9e33f73b03851da51e786f34d10ff67ee9"
	I1123 09:00:16.802084 1251278 cri.go:89] found id: "880d4ef1f66f609d81a2b6e4dbdc4df6a03f35b8f9d778a7ed11c71849e44600"
	I1123 09:00:16.802087 1251278 cri.go:89] found id: "6f050244cada0b5b8f4416fbf67c393c75b4f649d2e93d425648834a3cac6d0f"
	I1123 09:00:16.802091 1251278 cri.go:89] found id: "f202c3fe478cd3e553a0db890621640c851e159694e8421e346372dfd05c53b6"
	I1123 09:00:16.802093 1251278 cri.go:89] found id: ""
	I1123 09:00:16.802144 1251278 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:00:16.828117 1251278 retry.go:31] will retry after 147.966298ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:16Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:00:16.976396 1251278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:00:16.990046 1251278 pause.go:52] kubelet running: false
	I1123 09:00:16.990125 1251278 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:00:17.172692 1251278 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:00:17.172777 1251278 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:00:17.261809 1251278 cri.go:89] found id: "5bc07aa622b4f2cb6c6e329917932f22bf4e7146084f2b0def422d1c54e83a0c"
	I1123 09:00:17.261831 1251278 cri.go:89] found id: "9624a44b6ce5cc2c014b290df02810ce1759d2c0a54189542ed9456c4eb24a2a"
	I1123 09:00:17.261847 1251278 cri.go:89] found id: "3af4aa3938c00ead58f3f54203d3fd9e33f73b03851da51e786f34d10ff67ee9"
	I1123 09:00:17.261851 1251278 cri.go:89] found id: "880d4ef1f66f609d81a2b6e4dbdc4df6a03f35b8f9d778a7ed11c71849e44600"
	I1123 09:00:17.261855 1251278 cri.go:89] found id: "6f050244cada0b5b8f4416fbf67c393c75b4f649d2e93d425648834a3cac6d0f"
	I1123 09:00:17.261859 1251278 cri.go:89] found id: "f202c3fe478cd3e553a0db890621640c851e159694e8421e346372dfd05c53b6"
	I1123 09:00:17.261862 1251278 cri.go:89] found id: ""
	I1123 09:00:17.261910 1251278 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:00:17.273612 1251278 retry.go:31] will retry after 337.109571ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:17Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:00:17.610938 1251278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:00:17.625077 1251278 pause.go:52] kubelet running: false
	I1123 09:00:17.625142 1251278 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:00:17.808779 1251278 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:00:17.808869 1251278 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:00:17.914897 1251278 cri.go:89] found id: "5bc07aa622b4f2cb6c6e329917932f22bf4e7146084f2b0def422d1c54e83a0c"
	I1123 09:00:17.914919 1251278 cri.go:89] found id: "9624a44b6ce5cc2c014b290df02810ce1759d2c0a54189542ed9456c4eb24a2a"
	I1123 09:00:17.914924 1251278 cri.go:89] found id: "3af4aa3938c00ead58f3f54203d3fd9e33f73b03851da51e786f34d10ff67ee9"
	I1123 09:00:17.914928 1251278 cri.go:89] found id: "880d4ef1f66f609d81a2b6e4dbdc4df6a03f35b8f9d778a7ed11c71849e44600"
	I1123 09:00:17.914932 1251278 cri.go:89] found id: "6f050244cada0b5b8f4416fbf67c393c75b4f649d2e93d425648834a3cac6d0f"
	I1123 09:00:17.914935 1251278 cri.go:89] found id: "f202c3fe478cd3e553a0db890621640c851e159694e8421e346372dfd05c53b6"
	I1123 09:00:17.914938 1251278 cri.go:89] found id: ""
	I1123 09:00:17.914986 1251278 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:00:17.931560 1251278 retry.go:31] will retry after 324.486954ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:17Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:00:18.257108 1251278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:00:18.272138 1251278 pause.go:52] kubelet running: false
	I1123 09:00:18.272205 1251278 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:00:18.468040 1251278 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:00:18.468128 1251278 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:00:18.558901 1251278 cri.go:89] found id: "5bc07aa622b4f2cb6c6e329917932f22bf4e7146084f2b0def422d1c54e83a0c"
	I1123 09:00:18.558922 1251278 cri.go:89] found id: "9624a44b6ce5cc2c014b290df02810ce1759d2c0a54189542ed9456c4eb24a2a"
	I1123 09:00:18.558927 1251278 cri.go:89] found id: "3af4aa3938c00ead58f3f54203d3fd9e33f73b03851da51e786f34d10ff67ee9"
	I1123 09:00:18.558931 1251278 cri.go:89] found id: "880d4ef1f66f609d81a2b6e4dbdc4df6a03f35b8f9d778a7ed11c71849e44600"
	I1123 09:00:18.558935 1251278 cri.go:89] found id: "6f050244cada0b5b8f4416fbf67c393c75b4f649d2e93d425648834a3cac6d0f"
	I1123 09:00:18.558945 1251278 cri.go:89] found id: "f202c3fe478cd3e553a0db890621640c851e159694e8421e346372dfd05c53b6"
	I1123 09:00:18.558949 1251278 cri.go:89] found id: ""
	I1123 09:00:18.558998 1251278 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:00:18.575556 1251278 out.go:203] 
	W1123 09:00:18.578518 1251278 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:00:18.578538 1251278 out.go:285] * 
	* 
	W1123 09:00:18.590196 1251278 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:00:18.596150 1251278 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-261704 --alsologtostderr -v=1 failed: exit status 80
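Note: the pause exits with GUEST_PAUSE because every `sudo runc list -f json` attempt returns "open /run/runc: no such file or directory", even though crictl had just reported running container IDs. A hedged way to cross-check the runtime state from the host (the container name comes from this test; the commands themselves are an assumption, not part of the harness):

	# List containers through the CRI instead of runc's state directory.
	docker exec newest-cni-261704 crictl ps -a
	# Check whether runc's default state directory exists at all.
	docker exec newest-cni-261704 ls -ld /run/runc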
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-261704
helpers_test.go:243: (dbg) docker inspect newest-cni-261704:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772",
	        "Created": "2025-11-23T08:59:23.410749327Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1248882,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:00:00.896446691Z",
	            "FinishedAt": "2025-11-23T08:59:59.319264902Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772/hostname",
	        "HostsPath": "/var/lib/docker/containers/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772/hosts",
	        "LogPath": "/var/lib/docker/containers/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772-json.log",
	        "Name": "/newest-cni-261704",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-261704:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-261704",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772",
	                "LowerDir": "/var/lib/docker/overlay2/2ae4e31f8fd303775b938dc2f321e4c26fcc60c4aaae4415c302dc6ffd0b5f37-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2ae4e31f8fd303775b938dc2f321e4c26fcc60c4aaae4415c302dc6ffd0b5f37/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2ae4e31f8fd303775b938dc2f321e4c26fcc60c4aaae4415c302dc6ffd0b5f37/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2ae4e31f8fd303775b938dc2f321e4c26fcc60c4aaae4415c302dc6ffd0b5f37/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-261704",
	                "Source": "/var/lib/docker/volumes/newest-cni-261704/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-261704",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-261704",
	                "name.minikube.sigs.k8s.io": "newest-cni-261704",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "308ab2a57ea30708e2c8d60d8f6e6517b32d00d8912217449dc6a20bdf338427",
	            "SandboxKey": "/var/run/docker/netns/308ab2a57ea3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34552"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34553"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34556"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34554"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34555"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-261704": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:df:6a:b9:1c:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa1dd1d7d3d743751dd4838aea370419371be8ae8924c9730d80d4997d4494cf",
	                    "EndpointID": "139c9806d21631bae598319e905d01e3341ca35cde37d734be05401209ba306d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-261704",
	                        "b3bc5f529199"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
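Note: two details in the inspect output above tie back to the failed pause. HostConfig.Tmpfs shows that /run inside the node is a tmpfs, so anything under it (including runc's default /run/runc state directory) starts empty after the container restart at 09:00:00; and the "22/tcp" -> 34552 mapping is the port the pause command's SSH client connected to. Both can be re-read from the host with plain docker commands (profile name taken from this test, everything else a hedged sketch):

	# Show the published SSH port, as the pause command itself does.
	docker inspect newest-cni-261704 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# Confirm /run is mounted as tmpfs inside the node container.
	docker exec newest-cni-261704 sh -c "mount | grep ' /run '"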
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-261704 -n newest-cni-261704
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-261704 -n newest-cni-261704: exit status 2 (421.228734ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-261704 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-261704 logs -n 25: (1.351051883s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-879861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ stop    │ -p embed-certs-879861 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-879861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ image   │ default-k8s-diff-port-262764 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ pause   │ -p default-k8s-diff-port-262764 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-262764                                                                                                                                                                                                               │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ delete  │ -p default-k8s-diff-port-262764                                                                                                                                                                                                               │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ delete  │ -p disable-driver-mounts-880590                                                                                                                                                                                                               │ disable-driver-mounts-880590 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p no-preload-591175 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:59 UTC │
	│ image   │ embed-certs-879861 image list --format=json                                                                                                                                                                                                   │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ pause   │ -p embed-certs-879861 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	│ delete  │ -p embed-certs-879861                                                                                                                                                                                                                         │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ delete  │ -p embed-certs-879861                                                                                                                                                                                                                         │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p newest-cni-261704 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ addons  │ enable metrics-server -p newest-cni-261704 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-591175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	│ stop    │ -p newest-cni-261704 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ stop    │ -p no-preload-591175 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 09:00 UTC │
	│ addons  │ enable dashboard -p newest-cni-261704 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p newest-cni-261704 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 09:00 UTC │
	│ addons  │ enable dashboard -p no-preload-591175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ start   │ -p no-preload-591175 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ image   │ newest-cni-261704 image list --format=json                                                                                                                                                                                                    │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ pause   │ -p newest-cni-261704 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:00:12
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:00:12.280982 1250435 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:00:12.281166 1250435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:12.281195 1250435 out.go:374] Setting ErrFile to fd 2...
	I1123 09:00:12.281217 1250435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:12.281486 1250435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 09:00:12.287765 1250435 out.go:368] Setting JSON to false
	I1123 09:00:12.288797 1250435 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34958,"bootTime":1763853455,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 09:00:12.288919 1250435 start.go:143] virtualization:  
	I1123 09:00:12.291947 1250435 out.go:179] * [no-preload-591175] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 09:00:12.295773 1250435 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 09:00:12.295852 1250435 notify.go:221] Checking for updates...
	I1123 09:00:12.301710 1250435 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:00:12.304669 1250435 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 09:00:12.307693 1250435 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 09:00:12.311252 1250435 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 09:00:12.314144 1250435 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:00:12.317676 1250435 config.go:182] Loaded profile config "no-preload-591175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:12.318378 1250435 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:00:12.373602 1250435 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 09:00:12.373770 1250435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:00:12.486103 1250435 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 09:00:12.475407975 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:00:12.486203 1250435 docker.go:319] overlay module found
	I1123 09:00:12.489286 1250435 out.go:179] * Using the docker driver based on existing profile
	I1123 09:00:12.492224 1250435 start.go:309] selected driver: docker
	I1123 09:00:12.492248 1250435 start.go:927] validating driver "docker" against &{Name:no-preload-591175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-591175 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:00:12.492337 1250435 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:00:12.493026 1250435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:00:12.608594 1250435 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 09:00:12.595332025 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:00:12.608898 1250435 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:00:12.608927 1250435 cni.go:84] Creating CNI manager for ""
	I1123 09:00:12.608983 1250435 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:00:12.609014 1250435 start.go:353] cluster config:
	{Name:no-preload-591175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-591175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:00:12.612343 1250435 out.go:179] * Starting "no-preload-591175" primary control-plane node in "no-preload-591175" cluster
	I1123 09:00:12.615272 1250435 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:00:12.618188 1250435 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:00:12.621026 1250435 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:00:12.621170 1250435 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/config.json ...
	I1123 09:00:12.621476 1250435 cache.go:107] acquiring lock: {Name:mka2cb35964388564c4a147c0f220dec8bb32f92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:12.621556 1250435 cache.go:115] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1123 09:00:12.621565 1250435 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 100.125µs
	I1123 09:00:12.621576 1250435 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1123 09:00:12.621588 1250435 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:00:12.621769 1250435 cache.go:107] acquiring lock: {Name:mk8f8894eb123f292e1befe37ca59025bf250750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:12.621818 1250435 cache.go:115] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 09:00:12.621825 1250435 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 61.701µs
	I1123 09:00:12.621831 1250435 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 09:00:12.621841 1250435 cache.go:107] acquiring lock: {Name:mkfa049396ba1dee12c76864774f3aeacdb25dbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:12.621870 1250435 cache.go:115] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 09:00:12.621875 1250435 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 35.15µs
	I1123 09:00:12.621880 1250435 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 09:00:12.621890 1250435 cache.go:107] acquiring lock: {Name:mked8fbb27666d48a91880577550b6d3c15d46c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:12.621919 1250435 cache.go:115] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 09:00:12.621924 1250435 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 35.716µs
	I1123 09:00:12.621929 1250435 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 09:00:12.621938 1250435 cache.go:107] acquiring lock: {Name:mk78ea502d01db87a3fd0add08c07fa53ee3c177 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:12.621968 1250435 cache.go:115] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 09:00:12.621973 1250435 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 35.724µs
	I1123 09:00:12.621980 1250435 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 09:00:12.621993 1250435 cache.go:107] acquiring lock: {Name:mk24b215fc8a1c4de845c20a5f8cbdfbdd48812c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:12.622021 1250435 cache.go:115] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 09:00:12.622029 1250435 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 37.71µs
	I1123 09:00:12.622040 1250435 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 09:00:12.622053 1250435 cache.go:107] acquiring lock: {Name:mk5d6b1c9a54df439137e5ed9e773e09f1f35c7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:12.622085 1250435 cache.go:115] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1123 09:00:12.622090 1250435 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 42.903µs
	I1123 09:00:12.622097 1250435 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 09:00:12.622119 1250435 cache.go:107] acquiring lock: {Name:mkd443765c9d6bedf54886650c57996d65552ffb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:12.622144 1250435 cache.go:115] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 09:00:12.622149 1250435 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 31.803µs
	I1123 09:00:12.622155 1250435 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 09:00:12.622161 1250435 cache.go:87] Successfully saved all images to host disk.
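
Annotation: the cache.go lines above follow a simple pattern per image — take a per-destination lock, check whether the tarball already exists under cache/images/<arch>/, and skip the save when it does. A minimal sketch of that flow in Go (helper name, cache path and main() are illustrative assumptions, not minikube's actual cache.go):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    	"sync"
    	"time"
    )

    // one mutex per destination path, standing in for the named locks in the log
    var cacheLocks sync.Map

    // ensureCached reports whether the image still needs to be pulled and saved,
    // i.e. whether its tarball is missing from the local cache directory.
    func ensureCached(cacheDir, arch, image string) (bool, error) {
    	// "gcr.io/k8s-minikube/storage-provisioner:v5" -> ".../storage-provisioner_v5"
    	dest := filepath.Join(cacheDir, "images", arch, strings.ReplaceAll(image, ":", "_"))

    	mu, _ := cacheLocks.LoadOrStore(dest, &sync.Mutex{})
    	mu.(*sync.Mutex).Lock()
    	defer mu.(*sync.Mutex).Unlock()

    	start := time.Now()
    	if _, err := os.Stat(dest); err == nil {
    		fmt.Printf("cache image %q -> %q took %s (already exists)\n", image, dest, time.Since(start))
    		return false, nil // tar file already saved, nothing to do
    	} else if !os.IsNotExist(err) {
    		return false, err
    	}
    	return true, nil // caller must pull the image and write the tar file
    }

    func main() {
    	needsSave, err := ensureCached("/home/jenkins/.minikube/cache", "arm64",
    		"gcr.io/k8s-minikube/storage-provisioner:v5")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("needs pull+save:", needsSave)
    }
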
	I1123 09:00:12.646153 1250435 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:00:12.646174 1250435 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:00:12.646189 1250435 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:00:12.646217 1250435 start.go:360] acquireMachinesLock for no-preload-591175: {Name:mk29286da1b052dc7b05c36520527aed8159771a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:12.646264 1250435 start.go:364] duration metric: took 32.828µs to acquireMachinesLock for "no-preload-591175"
	I1123 09:00:12.646282 1250435 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:00:12.646287 1250435 fix.go:54] fixHost starting: 
	I1123 09:00:12.646535 1250435 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 09:00:12.682051 1250435 fix.go:112] recreateIfNeeded on no-preload-591175: state=Stopped err=<nil>
	W1123 09:00:12.682083 1250435 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:00:15.017685 1248704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.877166375s)
	I1123 09:00:15.017741 1248704 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.859269386s)
	I1123 09:00:15.017765 1248704 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:00:15.017827 1248704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:00:15.017910 1248704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.699571729s)
	I1123 09:00:15.124878 1248704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.571514615s)
	I1123 09:00:15.125041 1248704 api_server.go:72] duration metric: took 6.386857727s to wait for apiserver process to appear ...
	I1123 09:00:15.125057 1248704 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:00:15.125098 1248704 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:00:15.127911 1248704 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-261704 addons enable metrics-server
	
	I1123 09:00:15.130893 1248704 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1123 09:00:15.133778 1248704 addons.go:530] duration metric: took 6.395267221s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1123 09:00:15.140036 1248704 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 09:00:15.141491 1248704 api_server.go:141] control plane version: v1.34.1
	I1123 09:00:15.141526 1248704 api_server.go:131] duration metric: took 16.461607ms to wait for apiserver health ...
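
Annotation: the apiserver wait in api_server.go is effectively an HTTPS poll against /healthz until it answers 200 with body "ok". A minimal stand-alone sketch (the timeout, helper name and InsecureSkipVerify for the cluster's self-signed serving cert are assumptions for illustration):

    package main

    import (
    	"crypto/tls"
    	"errors"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls https://<endpoint>/healthz until it returns HTTP 200
    // or the deadline expires.
    func waitForHealthz(endpoint string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// the serving cert is signed by minikubeCA; skip verification
    			// for this illustrative probe only
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://" + endpoint + "/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("https://%s/healthz returned 200: %s\n", endpoint, body)
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return errors.New("apiserver did not become healthy in time")
    }

    func main() {
    	if err := waitForHealthz("192.168.76.2:8443", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
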
	I1123 09:00:15.141535 1248704 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:00:15.150160 1248704 system_pods.go:59] 8 kube-system pods found
	I1123 09:00:15.150209 1248704 system_pods.go:61] "coredns-66bc5c9577-mdvx8" [aae4ba97-00dc-4620-818d-e571ed2a5b99] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 09:00:15.150218 1248704 system_pods.go:61] "etcd-newest-cni-261704" [ceed2430-2405-415c-9d8a-cbb9fec62bb3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:00:15.150227 1248704 system_pods.go:61] "kindnet-k7fsm" [7c5f3452-ed50-4a8d-82e3-51abceb3b21b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:00:15.150233 1248704 system_pods.go:61] "kube-apiserver-newest-cni-261704" [b69d74bd-25b5-478e-a10e-e2c0b67c51d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:00:15.150241 1248704 system_pods.go:61] "kube-controller-manager-newest-cni-261704" [6b736ad3-cf70-428d-aabf-8635b1b3fabd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:00:15.150250 1248704 system_pods.go:61] "kube-proxy-wp8vw" [36630050-6d8d-433a-a3bc-77fc44b8484e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:00:15.150267 1248704 system_pods.go:61] "kube-scheduler-newest-cni-261704" [c824e0c6-1c1a-48a1-b05a-114c05052710] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:00:15.150272 1248704 system_pods.go:61] "storage-provisioner" [2afa132f-b478-4d70-9125-e632f2084e4e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 09:00:15.150281 1248704 system_pods.go:74] duration metric: took 8.740742ms to wait for pod list to return data ...
	I1123 09:00:15.150291 1248704 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:00:15.155826 1248704 default_sa.go:45] found service account: "default"
	I1123 09:00:15.155896 1248704 default_sa.go:55] duration metric: took 5.594418ms for default service account to be created ...
	I1123 09:00:15.155923 1248704 kubeadm.go:587] duration metric: took 6.417739538s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 09:00:15.155968 1248704 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:00:15.183864 1248704 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:00:15.183904 1248704 node_conditions.go:123] node cpu capacity is 2
	I1123 09:00:15.183917 1248704 node_conditions.go:105] duration metric: took 27.926228ms to run NodePressure ...
	I1123 09:00:15.183930 1248704 start.go:242] waiting for startup goroutines ...
	I1123 09:00:15.183938 1248704 start.go:247] waiting for cluster config update ...
	I1123 09:00:15.183950 1248704 start.go:256] writing updated cluster config ...
	I1123 09:00:15.184257 1248704 ssh_runner.go:195] Run: rm -f paused
	I1123 09:00:15.256547 1248704 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 09:00:15.259546 1248704 out.go:179] * Done! kubectl is now configured to use "newest-cni-261704" cluster and "default" namespace by default
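
Annotation: the closing "kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)" line reports the difference between the client's and the cluster's minor versions. A sketch of how such a skew could be computed (hypothetical helper, not minikube's actual check):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference between the minor components of
    // two "major.minor.patch" version strings, e.g. 1.33.2 vs 1.34.1 -> 1.
    func minorSkew(a, b string) (int, error) {
    	minor := func(v string) (int, error) {
    		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    		if len(parts) < 2 {
    			return 0, fmt.Errorf("unexpected version %q", v)
    		}
    		return strconv.Atoi(parts[1])
    	}
    	ma, err := minor(a)
    	if err != nil {
    		return 0, err
    	}
    	mb, err := minor(b)
    	if err != nil {
    		return 0, err
    	}
    	if ma > mb {
    		return ma - mb, nil
    	}
    	return mb - ma, nil
    }

    func main() {
    	skew, _ := minorSkew("1.33.2", "1.34.1")
    	fmt.Printf("kubectl: 1.33.2, cluster: 1.34.1 (minor skew: %d)\n", skew)
    }
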
	I1123 09:00:12.685394 1250435 out.go:252] * Restarting existing docker container for "no-preload-591175" ...
	I1123 09:00:12.685494 1250435 cli_runner.go:164] Run: docker start no-preload-591175
	I1123 09:00:13.062975 1250435 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 09:00:13.092611 1250435 kic.go:430] container "no-preload-591175" state is running.
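
Annotation: fixHost and kic.go above decide whether to reuse the machine by asking Docker for the container state with the exact inspect command shown in the log, then issuing `docker start` when it is stopped. A rough stand-alone equivalent (helper name and main() are assumptions, not minikube's fix.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerState runs the same inspect command seen in the log and returns
    // Docker's raw state string ("running", "exited", ...).
    func containerState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	state, err := containerState("no-preload-591175")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	if state != "running" {
    		// mirrors 'Restarting existing docker container for "no-preload-591175"'
    		fmt.Println("container not running, would run: docker start no-preload-591175")
    	}
    }
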
	I1123 09:00:13.092967 1250435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-591175
	I1123 09:00:13.119574 1250435 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/config.json ...
	I1123 09:00:13.119809 1250435 machine.go:94] provisionDockerMachine start ...
	I1123 09:00:13.119872 1250435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 09:00:13.144768 1250435 main.go:143] libmachine: Using SSH client type: native
	I1123 09:00:13.145094 1250435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34557 <nil> <nil>}
	I1123 09:00:13.145103 1250435 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:00:13.145649 1250435 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41552->127.0.0.1:34557: read: connection reset by peer
	I1123 09:00:16.346784 1250435 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-591175
	
	I1123 09:00:16.346806 1250435 ubuntu.go:182] provisioning hostname "no-preload-591175"
	I1123 09:00:16.346872 1250435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 09:00:16.379282 1250435 main.go:143] libmachine: Using SSH client type: native
	I1123 09:00:16.379592 1250435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34557 <nil> <nil>}
	I1123 09:00:16.379602 1250435 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-591175 && echo "no-preload-591175" | sudo tee /etc/hostname
	I1123 09:00:16.563295 1250435 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-591175
	
	I1123 09:00:16.563457 1250435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 09:00:16.585031 1250435 main.go:143] libmachine: Using SSH client type: native
	I1123 09:00:16.585439 1250435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34557 <nil> <nil>}
	I1123 09:00:16.585489 1250435 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-591175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-591175/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-591175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:00:16.750907 1250435 main.go:143] libmachine: SSH cmd err, output: <nil>: 
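
Annotation: the hostname provisioning step above sends a small guard script over SSH that only touches /etc/hosts when the 127.0.1.1 entry is missing or stale. A sketch of rendering that same command for an arbitrary hostname (hypothetical helper; the script body is copied from the log):

    package main

    import "fmt"

    // hostsFixCommand builds the /etc/hosts guard script: replace an existing
    // 127.0.1.1 entry if present, otherwise append one.
    func hostsFixCommand(hostname string) string {
    	return fmt.Sprintf(`
    		if ! grep -xq '.*\s%s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname, hostname, hostname)
    }

    func main() {
    	fmt.Println(hostsFixCommand("no-preload-591175"))
    }
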
	I1123 09:00:16.750998 1250435 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 09:00:16.751042 1250435 ubuntu.go:190] setting up certificates
	I1123 09:00:16.751081 1250435 provision.go:84] configureAuth start
	I1123 09:00:16.751214 1250435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-591175
	I1123 09:00:16.779021 1250435 provision.go:143] copyHostCerts
	I1123 09:00:16.779093 1250435 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem, removing ...
	I1123 09:00:16.779106 1250435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem
	I1123 09:00:16.779202 1250435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 09:00:16.779318 1250435 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem, removing ...
	I1123 09:00:16.779324 1250435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem
	I1123 09:00:16.779355 1250435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 09:00:16.779433 1250435 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem, removing ...
	I1123 09:00:16.779444 1250435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem
	I1123 09:00:16.779473 1250435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
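
Annotation: copyHostCerts above removes any stale ca.pem/cert.pem/key.pem before copying fresh ones into the profile directory. The same remove-then-copy idiom as a small sketch (helper name and paths are illustrative assumptions):

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    )

    // refreshCopy replaces dst with a fresh copy of src, removing any existing
    // file first.
    func refreshCopy(src, dst string) (int64, error) {
    	if _, err := os.Stat(dst); err == nil {
    		if err := os.Remove(dst); err != nil {
    			return 0, err
    		}
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return 0, err
    	}
    	defer in.Close()
    	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
    	if err != nil {
    		return 0, err
    	}
    	defer out.Close()
    	return io.Copy(out, in)
    }

    func main() {
    	n, err := refreshCopy("certs/ca.pem", "ca.pem")
    	if err != nil {
    		fmt.Println("copy failed:", err)
    		return
    	}
    	fmt.Printf("cp: certs/ca.pem --> ca.pem (%d bytes)\n", n)
    }
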
	I1123 09:00:16.779536 1250435 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.no-preload-591175 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-591175]
	I1123 09:00:16.902358 1250435 provision.go:177] copyRemoteCerts
	I1123 09:00:16.902542 1250435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:00:16.902601 1250435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 09:00:16.919690 1250435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34557 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 09:00:17.028279 1250435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:00:17.061703 1250435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 09:00:17.095692 1250435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:00:17.123940 1250435 provision.go:87] duration metric: took 372.820379ms to configureAuth
	I1123 09:00:17.124020 1250435 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:00:17.124272 1250435 config.go:182] Loaded profile config "no-preload-591175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:17.124430 1250435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 09:00:17.143534 1250435 main.go:143] libmachine: Using SSH client type: native
	I1123 09:00:17.143858 1250435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34557 <nil> <nil>}
	I1123 09:00:17.143872 1250435 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
	
	==> CRI-O <==
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.276608713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.283963894Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-wp8vw/POD" id=fab092dd-dce6-4606-8bc6-7272fd6ce4a7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.28404422Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.293926879Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=fab092dd-dce6-4606-8bc6-7272fd6ce4a7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.30030146Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=eebf59e6-c218-4256-a500-2883496eaa71 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.316412562Z" level=info msg="Ran pod sandbox 3b4d9cbd9c0e9c96b56dde65e3d268d50d01ef435d9bd52a6e7f09fd4a8604a9 with infra container: kube-system/kube-proxy-wp8vw/POD" id=fab092dd-dce6-4606-8bc6-7272fd6ce4a7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.316466124Z" level=info msg="Ran pod sandbox 0d75a72dc2f75698db986d484b53f1b087968b7cabf88ca78e89907e5aebd7f1 with infra container: kube-system/kindnet-k7fsm/POD" id=eebf59e6-c218-4256-a500-2883496eaa71 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.318184533Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=29496c33-312d-4afa-8d22-97fe897c359d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.319025937Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=43ba27a9-5004-43dc-9472-31fdc43067e6 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.319122049Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=51937e58-0359-4ff4-a668-9de83b202305 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.320224475Z" level=info msg="Creating container: kube-system/kindnet-k7fsm/kindnet-cni" id=a710c393-7049-482b-a879-418a1249c761 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.320325576Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.32448417Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=68b62a4b-5129-45f1-b463-99f86c9cd8fa name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.32543273Z" level=info msg="Creating container: kube-system/kube-proxy-wp8vw/kube-proxy" id=81f6bab0-74eb-4d3f-8a3f-ea0aab6339a7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.325539378Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.337491697Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.340305541Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.342739089Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.347417127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.401334129Z" level=info msg="Created container 5bc07aa622b4f2cb6c6e329917932f22bf4e7146084f2b0def422d1c54e83a0c: kube-system/kindnet-k7fsm/kindnet-cni" id=a710c393-7049-482b-a879-418a1249c761 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.402054626Z" level=info msg="Starting container: 5bc07aa622b4f2cb6c6e329917932f22bf4e7146084f2b0def422d1c54e83a0c" id=c8027edc-d9f3-42d3-88db-d5bc389cb309 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.405530113Z" level=info msg="Started container" PID=1066 containerID=5bc07aa622b4f2cb6c6e329917932f22bf4e7146084f2b0def422d1c54e83a0c description=kube-system/kindnet-k7fsm/kindnet-cni id=c8027edc-d9f3-42d3-88db-d5bc389cb309 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0d75a72dc2f75698db986d484b53f1b087968b7cabf88ca78e89907e5aebd7f1
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.421033618Z" level=info msg="Created container 9624a44b6ce5cc2c014b290df02810ce1759d2c0a54189542ed9456c4eb24a2a: kube-system/kube-proxy-wp8vw/kube-proxy" id=81f6bab0-74eb-4d3f-8a3f-ea0aab6339a7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.421747477Z" level=info msg="Starting container: 9624a44b6ce5cc2c014b290df02810ce1759d2c0a54189542ed9456c4eb24a2a" id=b4209844-8fb1-4800-879d-21d5174cf651 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.424441465Z" level=info msg="Started container" PID=1067 containerID=9624a44b6ce5cc2c014b290df02810ce1759d2c0a54189542ed9456c4eb24a2a description=kube-system/kube-proxy-wp8vw/kube-proxy id=b4209844-8fb1-4800-879d-21d5174cf651 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b4d9cbd9c0e9c96b56dde65e3d268d50d01ef435d9bd52a6e7f09fd4a8604a9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	5bc07aa622b4f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   0d75a72dc2f75       kindnet-k7fsm                               kube-system
	9624a44b6ce5c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   3b4d9cbd9c0e9       kube-proxy-wp8vw                            kube-system
	3af4aa3938c00       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   11 seconds ago      Running             kube-scheduler            1                   831d5c1552685       kube-scheduler-newest-cni-261704            kube-system
	880d4ef1f66f6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   11 seconds ago      Running             kube-apiserver            1                   a6033c94909c0       kube-apiserver-newest-cni-261704            kube-system
	6f050244cada0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   11 seconds ago      Running             kube-controller-manager   1                   6bb1646aa384c       kube-controller-manager-newest-cni-261704   kube-system
	f202c3fe478cd       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   11 seconds ago      Running             etcd                      1                   5b7fc25b4be89       etcd-newest-cni-261704                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-261704
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-261704
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=newest-cni-261704
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_59_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:59:46 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-261704
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:00:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:00:13 +0000   Sun, 23 Nov 2025 08:59:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:00:13 +0000   Sun, 23 Nov 2025 08:59:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:00:13 +0000   Sun, 23 Nov 2025 08:59:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 23 Nov 2025 09:00:13 +0000   Sun, 23 Nov 2025 08:59:41 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-261704
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                56e8b4b2-8e75-46e0-8d33-48b3ccd6ced8
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-261704                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         30s
	  kube-system                 kindnet-k7fsm                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-261704             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-newest-cni-261704    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-wp8vw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-261704             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 31s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 31s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     30s                kubelet          Node newest-cni-261704 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    30s                kubelet          Node newest-cni-261704 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  30s                kubelet          Node newest-cni-261704 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           27s                node-controller  Node newest-cni-261704 event: Registered Node newest-cni-261704 in Controller
	  Normal   Starting                 12s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11s (x8 over 11s)  kubelet          Node newest-cni-261704 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11s (x8 over 11s)  kubelet          Node newest-cni-261704 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11s (x8 over 11s)  kubelet          Node newest-cni-261704 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-261704 event: Registered Node newest-cni-261704 in Controller
	
	
	==> dmesg <==
	[Nov23 08:38] overlayfs: idmapped layers are currently not supported
	[  +8.276067] overlayfs: idmapped layers are currently not supported
	[Nov23 08:39] overlayfs: idmapped layers are currently not supported
	[ +25.090966] overlayfs: idmapped layers are currently not supported
	[Nov23 08:40] overlayfs: idmapped layers are currently not supported
	[ +26.896711] overlayfs: idmapped layers are currently not supported
	[Nov23 08:41] overlayfs: idmapped layers are currently not supported
	[Nov23 08:43] overlayfs: idmapped layers are currently not supported
	[Nov23 08:45] overlayfs: idmapped layers are currently not supported
	[Nov23 08:46] overlayfs: idmapped layers are currently not supported
	[Nov23 08:47] overlayfs: idmapped layers are currently not supported
	[Nov23 08:49] overlayfs: idmapped layers are currently not supported
	[Nov23 08:51] overlayfs: idmapped layers are currently not supported
	[ +55.116920] overlayfs: idmapped layers are currently not supported
	[Nov23 08:52] overlayfs: idmapped layers are currently not supported
	[  +5.731396] overlayfs: idmapped layers are currently not supported
	[Nov23 08:53] overlayfs: idmapped layers are currently not supported
	[Nov23 08:54] overlayfs: idmapped layers are currently not supported
	[Nov23 08:55] overlayfs: idmapped layers are currently not supported
	[Nov23 08:56] overlayfs: idmapped layers are currently not supported
	[Nov23 08:57] overlayfs: idmapped layers are currently not supported
	[Nov23 08:58] overlayfs: idmapped layers are currently not supported
	[ +37.440319] overlayfs: idmapped layers are currently not supported
	[Nov23 08:59] overlayfs: idmapped layers are currently not supported
	[Nov23 09:00] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f202c3fe478cd3e553a0db890621640c851e159694e8421e346372dfd05c53b6] <==
	{"level":"warn","ts":"2025-11-23T09:00:11.116863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.137453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.156505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.181984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.216165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.224168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.256245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.267932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.286096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.309288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.324095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.335861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.352478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.368946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.386320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.403370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.419692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.437753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.464105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.486506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.505237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.526170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.569257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.592633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.742698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55206","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:00:20 up  9:42,  0 user,  load average: 4.12, 3.47, 2.86
	Linux newest-cni-261704 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5bc07aa622b4f2cb6c6e329917932f22bf4e7146084f2b0def422d1c54e83a0c] <==
	I1123 09:00:14.474943       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:00:14.534649       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 09:00:14.534789       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:00:14.534811       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:00:14.534823       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:00:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:00:14.768597       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:00:14.769139       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:00:14.769190       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:00:14.776237       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [880d4ef1f66f609d81a2b6e4dbdc4df6a03f35b8f9d778a7ed11c71849e44600] <==
	I1123 09:00:13.502512       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 09:00:13.503044       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 09:00:13.507488       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 09:00:13.507589       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 09:00:13.507634       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:00:13.516635       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 09:00:13.516790       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 09:00:13.532121       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 09:00:13.544810       1 aggregator.go:171] initial CRD sync complete...
	I1123 09:00:13.544829       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 09:00:13.544837       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:00:13.544845       1 cache.go:39] Caches are synced for autoregister controller
	E1123 09:00:13.564831       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 09:00:13.731797       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:00:14.063875       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:00:14.270199       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:00:14.442212       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:00:14.547981       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:00:14.663043       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:00:15.083512       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.137.195"}
	I1123 09:00:15.113803       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.144.22"}
	I1123 09:00:16.093941       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:00:16.498719       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:00:16.544698       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:00:16.833027       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [6f050244cada0b5b8f4416fbf67c393c75b4f649d2e93d425648834a3cac6d0f] <==
	I1123 09:00:16.080258       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:00:16.082782       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:00:16.082816       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 09:00:16.087274       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:00:16.090476       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 09:00:16.090563       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 09:00:16.090601       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 09:00:16.090628       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 09:00:16.090688       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:00:16.090758       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:00:16.092914       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:00:16.098235       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:00:16.101147       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 09:00:16.101259       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 09:00:16.101352       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-261704"
	I1123 09:00:16.101406       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 09:00:16.101451       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:00:16.117057       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:00:16.123860       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:00:16.123884       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:00:16.123890       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:00:16.125063       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:00:16.138483       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 09:00:16.143715       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 09:00:16.161111       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [9624a44b6ce5cc2c014b290df02810ce1759d2c0a54189542ed9456c4eb24a2a] <==
	I1123 09:00:15.059527       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:00:15.168963       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:00:15.271711       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:00:15.272792       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 09:00:15.272931       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:00:15.364520       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:00:15.364637       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:00:15.368645       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:00:15.368991       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:00:15.369154       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:00:15.374031       1 config.go:200] "Starting service config controller"
	I1123 09:00:15.374090       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:00:15.374226       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:00:15.374261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:00:15.374460       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:00:15.374499       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:00:15.375161       1 config.go:309] "Starting node config controller"
	I1123 09:00:15.375265       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:00:15.375297       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:00:15.483310       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:00:15.483346       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:00:15.483391       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3af4aa3938c00ead58f3f54203d3fd9e33f73b03851da51e786f34d10ff67ee9] <==
	I1123 09:00:11.986685       1 serving.go:386] Generated self-signed cert in-memory
	I1123 09:00:14.872179       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:00:14.872212       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:00:14.903958       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 09:00:14.904080       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 09:00:14.904194       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 09:00:14.904224       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:00:14.904240       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:00:14.915665       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:00:14.904251       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:00:14.915801       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:00:15.014899       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 09:00:15.024077       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:00:15.025008       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.575590     732 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.601135     732 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.601263     732 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.601296     732 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.604197     732 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: E1123 09:00:13.642252     732 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-261704\" already exists" pod="kube-system/etcd-newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.642449     732 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: E1123 09:00:13.689459     732 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-261704\" already exists" pod="kube-system/kube-apiserver-newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.689513     732 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: E1123 09:00:13.732790     732 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-261704\" already exists" pod="kube-system/kube-controller-manager-newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.732849     732 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: E1123 09:00:13.767650     732 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-261704\" already exists" pod="kube-system/kube-scheduler-newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.962698     732 apiserver.go:52] "Watching apiserver"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.983648     732 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 23 09:00:14 newest-cni-261704 kubelet[732]: I1123 09:00:14.050052     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c5f3452-ed50-4a8d-82e3-51abceb3b21b-lib-modules\") pod \"kindnet-k7fsm\" (UID: \"7c5f3452-ed50-4a8d-82e3-51abceb3b21b\") " pod="kube-system/kindnet-k7fsm"
	Nov 23 09:00:14 newest-cni-261704 kubelet[732]: I1123 09:00:14.050182     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c5f3452-ed50-4a8d-82e3-51abceb3b21b-xtables-lock\") pod \"kindnet-k7fsm\" (UID: \"7c5f3452-ed50-4a8d-82e3-51abceb3b21b\") " pod="kube-system/kindnet-k7fsm"
	Nov 23 09:00:14 newest-cni-261704 kubelet[732]: I1123 09:00:14.050220     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7c5f3452-ed50-4a8d-82e3-51abceb3b21b-cni-cfg\") pod \"kindnet-k7fsm\" (UID: \"7c5f3452-ed50-4a8d-82e3-51abceb3b21b\") " pod="kube-system/kindnet-k7fsm"
	Nov 23 09:00:14 newest-cni-261704 kubelet[732]: I1123 09:00:14.050283     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36630050-6d8d-433a-a3bc-77fc44b8484e-xtables-lock\") pod \"kube-proxy-wp8vw\" (UID: \"36630050-6d8d-433a-a3bc-77fc44b8484e\") " pod="kube-system/kube-proxy-wp8vw"
	Nov 23 09:00:14 newest-cni-261704 kubelet[732]: I1123 09:00:14.050304     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36630050-6d8d-433a-a3bc-77fc44b8484e-lib-modules\") pod \"kube-proxy-wp8vw\" (UID: \"36630050-6d8d-433a-a3bc-77fc44b8484e\") " pod="kube-system/kube-proxy-wp8vw"
	Nov 23 09:00:14 newest-cni-261704 kubelet[732]: I1123 09:00:14.095794     732 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 09:00:14 newest-cni-261704 kubelet[732]: W1123 09:00:14.312582     732 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772/crio-3b4d9cbd9c0e9c96b56dde65e3d268d50d01ef435d9bd52a6e7f09fd4a8604a9 WatchSource:0}: Error finding container 3b4d9cbd9c0e9c96b56dde65e3d268d50d01ef435d9bd52a6e7f09fd4a8604a9: Status 404 returned error can't find the container with id 3b4d9cbd9c0e9c96b56dde65e3d268d50d01ef435d9bd52a6e7f09fd4a8604a9
	Nov 23 09:00:14 newest-cni-261704 kubelet[732]: W1123 09:00:14.313406     732 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772/crio-0d75a72dc2f75698db986d484b53f1b087968b7cabf88ca78e89907e5aebd7f1 WatchSource:0}: Error finding container 0d75a72dc2f75698db986d484b53f1b087968b7cabf88ca78e89907e5aebd7f1: Status 404 returned error can't find the container with id 0d75a72dc2f75698db986d484b53f1b087968b7cabf88ca78e89907e5aebd7f1
	Nov 23 09:00:16 newest-cni-261704 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 09:00:16 newest-cni-261704 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 09:00:16 newest-cni-261704 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-261704 -n newest-cni-261704
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-261704 -n newest-cni-261704: exit status 2 (495.123534ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
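The harness flags the nonzero exit here but carries on ("may be ok"), since `minikube status` reports component state through its exit code rather than always returning 0. A minimal Go sketch of that shell-out pattern, not the harness's actual helper; the binary path and profile name are simply the ones from this run:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Run the same status query the post-mortem uses and capture stdout.
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}", "-p", "newest-cni-261704", "-n", "newest-cni-261704")
		out, err := cmd.Output()

		// A nonzero exit is recorded but not treated as fatal ("may be ok"):
		// read the code out of the *exec.ExitError instead of aborting.
		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode()
		} else if err != nil {
			panic(err) // the binary could not be started at all
		}
		fmt.Printf("stdout=%q exit=%d\n", string(out), code)
	}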
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-261704 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-mdvx8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jzcrk kubernetes-dashboard-855c9754f9-6xcjk
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-261704 describe pod coredns-66bc5c9577-mdvx8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jzcrk kubernetes-dashboard-855c9754f9-6xcjk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-261704 describe pod coredns-66bc5c9577-mdvx8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jzcrk kubernetes-dashboard-855c9754f9-6xcjk: exit status 1 (139.275469ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-mdvx8" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-jzcrk" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-6xcjk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-261704 describe pod coredns-66bc5c9577-mdvx8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jzcrk kubernetes-dashboard-855c9754f9-6xcjk: exit status 1
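The non-running-pod list above comes from a kubectl field selector. The same query expressed with client-go, as a minimal sketch assuming the default kubeconfig's current context points at the cluster under test:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the default kubeconfig; its current context must
		// point at the profile under test (newest-cni-261704 in this run).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Same filter the harness passes to kubectl: every pod whose phase is not Running.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace+"/"+p.Name, p.Status.Phase)
		}
	}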
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-261704
helpers_test.go:243: (dbg) docker inspect newest-cni-261704:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772",
	        "Created": "2025-11-23T08:59:23.410749327Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1248882,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:00:00.896446691Z",
	            "FinishedAt": "2025-11-23T08:59:59.319264902Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772/hostname",
	        "HostsPath": "/var/lib/docker/containers/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772/hosts",
	        "LogPath": "/var/lib/docker/containers/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772-json.log",
	        "Name": "/newest-cni-261704",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-261704:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-261704",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772",
	                "LowerDir": "/var/lib/docker/overlay2/2ae4e31f8fd303775b938dc2f321e4c26fcc60c4aaae4415c302dc6ffd0b5f37-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2ae4e31f8fd303775b938dc2f321e4c26fcc60c4aaae4415c302dc6ffd0b5f37/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2ae4e31f8fd303775b938dc2f321e4c26fcc60c4aaae4415c302dc6ffd0b5f37/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2ae4e31f8fd303775b938dc2f321e4c26fcc60c4aaae4415c302dc6ffd0b5f37/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-261704",
	                "Source": "/var/lib/docker/volumes/newest-cni-261704/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-261704",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-261704",
	                "name.minikube.sigs.k8s.io": "newest-cni-261704",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "308ab2a57ea30708e2c8d60d8f6e6517b32d00d8912217449dc6a20bdf338427",
	            "SandboxKey": "/var/run/docker/netns/308ab2a57ea3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34552"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34553"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34556"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34554"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34555"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-261704": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:df:6a:b9:1c:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa1dd1d7d3d743751dd4838aea370419371be8ae8924c9730d80d4997d4494cf",
	                    "EndpointID": "139c9806d21631bae598319e905d01e3341ca35cde37d734be05401209ba306d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-261704",
	                        "b3bc5f529199"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
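The status and inspect checks in this post-mortem only rely on a handful of fields from that large `docker inspect` document (container state and published ports). A minimal Go sketch that decodes just those fields, assuming the docker CLI is on PATH; the container name is the one from this run:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Only the fields the post-mortem actually looks at; docker inspect emits far more.
	type containerInfo struct {
		Name  string
		State struct {
			Status  string
			Running bool
			Paused  bool
		}
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "newest-cni-261704").Output()
		if err != nil {
			panic(err)
		}
		var infos []containerInfo
		if err := json.Unmarshal(out, &infos); err != nil {
			panic(err)
		}
		for _, c := range infos {
			fmt.Println(c.Name, "status:", c.State.Status, "paused:", c.State.Paused)
			for port, binds := range c.NetworkSettings.Ports {
				for _, b := range binds {
					fmt.Printf("  %s -> %s:%s\n", port, b.HostIp, b.HostPort)
				}
			}
		}
	}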
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-261704 -n newest-cni-261704
E1123 09:00:21.854981 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-261704 -n newest-cni-261704: exit status 2 (628.356498ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-261704 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-261704 logs -n 25: (1.636885973s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-879861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ stop    │ -p embed-certs-879861 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:58 UTC │
	│ addons  │ enable dashboard -p embed-certs-879861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ image   │ default-k8s-diff-port-262764 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ pause   │ -p default-k8s-diff-port-262764 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-262764                                                                                                                                                                                                               │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ delete  │ -p default-k8s-diff-port-262764                                                                                                                                                                                                               │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ delete  │ -p disable-driver-mounts-880590                                                                                                                                                                                                               │ disable-driver-mounts-880590 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p no-preload-591175 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:59 UTC │
	│ image   │ embed-certs-879861 image list --format=json                                                                                                                                                                                                   │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ pause   │ -p embed-certs-879861 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	│ delete  │ -p embed-certs-879861                                                                                                                                                                                                                         │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ delete  │ -p embed-certs-879861                                                                                                                                                                                                                         │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p newest-cni-261704 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ addons  │ enable metrics-server -p newest-cni-261704 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-591175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	│ stop    │ -p newest-cni-261704 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ stop    │ -p no-preload-591175 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 09:00 UTC │
	│ addons  │ enable dashboard -p newest-cni-261704 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p newest-cni-261704 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 09:00 UTC │
	│ addons  │ enable dashboard -p no-preload-591175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ start   │ -p no-preload-591175 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ image   │ newest-cni-261704 image list --format=json                                                                                                                                                                                                    │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ pause   │ -p newest-cni-261704 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:00:12
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:00:12.280982 1250435 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:00:12.281166 1250435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:12.281195 1250435 out.go:374] Setting ErrFile to fd 2...
	I1123 09:00:12.281217 1250435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:12.281486 1250435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 09:00:12.287765 1250435 out.go:368] Setting JSON to false
	I1123 09:00:12.288797 1250435 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34958,"bootTime":1763853455,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 09:00:12.288919 1250435 start.go:143] virtualization:  
	I1123 09:00:12.291947 1250435 out.go:179] * [no-preload-591175] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 09:00:12.295773 1250435 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 09:00:12.295852 1250435 notify.go:221] Checking for updates...
	I1123 09:00:12.301710 1250435 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:00:12.304669 1250435 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 09:00:12.307693 1250435 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 09:00:12.311252 1250435 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 09:00:12.314144 1250435 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:00:12.317676 1250435 config.go:182] Loaded profile config "no-preload-591175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:12.318378 1250435 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:00:12.373602 1250435 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 09:00:12.373770 1250435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:00:12.486103 1250435 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 09:00:12.475407975 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:00:12.486203 1250435 docker.go:319] overlay module found
	I1123 09:00:12.489286 1250435 out.go:179] * Using the docker driver based on existing profile
	I1123 09:00:12.492224 1250435 start.go:309] selected driver: docker
	I1123 09:00:12.492248 1250435 start.go:927] validating driver "docker" against &{Name:no-preload-591175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-591175 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:00:12.492337 1250435 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:00:12.493026 1250435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:00:12.608594 1250435 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 09:00:12.595332025 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:00:12.608898 1250435 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:00:12.608927 1250435 cni.go:84] Creating CNI manager for ""
	I1123 09:00:12.608983 1250435 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:00:12.609014 1250435 start.go:353] cluster config:
	{Name:no-preload-591175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-591175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:00:12.612343 1250435 out.go:179] * Starting "no-preload-591175" primary control-plane node in "no-preload-591175" cluster
	I1123 09:00:12.615272 1250435 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:00:12.618188 1250435 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:00:12.621026 1250435 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:00:12.621170 1250435 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/config.json ...
	I1123 09:00:12.621476 1250435 cache.go:107] acquiring lock: {Name:mka2cb35964388564c4a147c0f220dec8bb32f92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:12.621556 1250435 cache.go:115] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1123 09:00:12.621565 1250435 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 100.125µs
	I1123 09:00:12.621576 1250435 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1123 09:00:12.621588 1250435 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:00:12.621769 1250435 cache.go:107] acquiring lock: {Name:mk8f8894eb123f292e1befe37ca59025bf250750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:12.621818 1250435 cache.go:115] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 09:00:12.621825 1250435 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 61.701µs
	I1123 09:00:12.621831 1250435 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 09:00:12.621841 1250435 cache.go:107] acquiring lock: {Name:mkfa049396ba1dee12c76864774f3aeacdb25dbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:12.621870 1250435 cache.go:115] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 09:00:12.621875 1250435 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 35.15µs
	I1123 09:00:12.621880 1250435 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 09:00:12.621890 1250435 cache.go:107] acquiring lock: {Name:mked8fbb27666d48a91880577550b6d3c15d46c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:12.621919 1250435 cache.go:115] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 09:00:12.621924 1250435 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 35.716µs
	I1123 09:00:12.621929 1250435 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 09:00:12.621938 1250435 cache.go:107] acquiring lock: {Name:mk78ea502d01db87a3fd0add08c07fa53ee3c177 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:12.621968 1250435 cache.go:115] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 09:00:12.621973 1250435 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 35.724µs
	I1123 09:00:12.621980 1250435 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 09:00:12.621993 1250435 cache.go:107] acquiring lock: {Name:mk24b215fc8a1c4de845c20a5f8cbdfbdd48812c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:12.622021 1250435 cache.go:115] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 09:00:12.622029 1250435 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 37.71µs
	I1123 09:00:12.622040 1250435 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 09:00:12.622053 1250435 cache.go:107] acquiring lock: {Name:mk5d6b1c9a54df439137e5ed9e773e09f1f35c7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:12.622085 1250435 cache.go:115] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1123 09:00:12.622090 1250435 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 42.903µs
	I1123 09:00:12.622097 1250435 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 09:00:12.622119 1250435 cache.go:107] acquiring lock: {Name:mkd443765c9d6bedf54886650c57996d65552ffb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:12.622144 1250435 cache.go:115] /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 09:00:12.622149 1250435 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 31.803µs
	I1123 09:00:12.622155 1250435 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 09:00:12.622161 1250435 cache.go:87] Successfully saved all images to host disk.
	I1123 09:00:12.646153 1250435 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:00:12.646174 1250435 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:00:12.646189 1250435 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:00:12.646217 1250435 start.go:360] acquireMachinesLock for no-preload-591175: {Name:mk29286da1b052dc7b05c36520527aed8159771a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:12.646264 1250435 start.go:364] duration metric: took 32.828µs to acquireMachinesLock for "no-preload-591175"
	I1123 09:00:12.646282 1250435 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:00:12.646287 1250435 fix.go:54] fixHost starting: 
	I1123 09:00:12.646535 1250435 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 09:00:12.682051 1250435 fix.go:112] recreateIfNeeded on no-preload-591175: state=Stopped err=<nil>
	W1123 09:00:12.682083 1250435 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:00:15.017685 1248704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.877166375s)
	I1123 09:00:15.017741 1248704 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.859269386s)
	I1123 09:00:15.017765 1248704 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:00:15.017827 1248704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:00:15.017910 1248704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.699571729s)
	I1123 09:00:15.124878 1248704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.571514615s)
	I1123 09:00:15.125041 1248704 api_server.go:72] duration metric: took 6.386857727s to wait for apiserver process to appear ...
	I1123 09:00:15.125057 1248704 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:00:15.125098 1248704 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:00:15.127911 1248704 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-261704 addons enable metrics-server
	
	I1123 09:00:15.130893 1248704 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1123 09:00:15.133778 1248704 addons.go:530] duration metric: took 6.395267221s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1123 09:00:15.140036 1248704 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 09:00:15.141491 1248704 api_server.go:141] control plane version: v1.34.1
	I1123 09:00:15.141526 1248704 api_server.go:131] duration metric: took 16.461607ms to wait for apiserver health ...
	I1123 09:00:15.141535 1248704 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:00:15.150160 1248704 system_pods.go:59] 8 kube-system pods found
	I1123 09:00:15.150209 1248704 system_pods.go:61] "coredns-66bc5c9577-mdvx8" [aae4ba97-00dc-4620-818d-e571ed2a5b99] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 09:00:15.150218 1248704 system_pods.go:61] "etcd-newest-cni-261704" [ceed2430-2405-415c-9d8a-cbb9fec62bb3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:00:15.150227 1248704 system_pods.go:61] "kindnet-k7fsm" [7c5f3452-ed50-4a8d-82e3-51abceb3b21b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:00:15.150233 1248704 system_pods.go:61] "kube-apiserver-newest-cni-261704" [b69d74bd-25b5-478e-a10e-e2c0b67c51d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:00:15.150241 1248704 system_pods.go:61] "kube-controller-manager-newest-cni-261704" [6b736ad3-cf70-428d-aabf-8635b1b3fabd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:00:15.150250 1248704 system_pods.go:61] "kube-proxy-wp8vw" [36630050-6d8d-433a-a3bc-77fc44b8484e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:00:15.150267 1248704 system_pods.go:61] "kube-scheduler-newest-cni-261704" [c824e0c6-1c1a-48a1-b05a-114c05052710] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:00:15.150272 1248704 system_pods.go:61] "storage-provisioner" [2afa132f-b478-4d70-9125-e632f2084e4e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 09:00:15.150281 1248704 system_pods.go:74] duration metric: took 8.740742ms to wait for pod list to return data ...
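The coredns and storage-provisioner pods above are Pending only because the node still carries its startup taint (the scheduler message cites node.kubernetes.io/not-ready). A minimal way to confirm that against the cluster, assuming the context name used throughout this log:

	kubectl --context newest-cni-261704 get nodes -o jsonpath='{.items[0].spec.taints[*].key}'
	# expected while the CNI is still coming up: node.kubernetes.io/not-ready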
	I1123 09:00:15.150291 1248704 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:00:15.155826 1248704 default_sa.go:45] found service account: "default"
	I1123 09:00:15.155896 1248704 default_sa.go:55] duration metric: took 5.594418ms for default service account to be created ...
	I1123 09:00:15.155923 1248704 kubeadm.go:587] duration metric: took 6.417739538s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 09:00:15.155968 1248704 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:00:15.183864 1248704 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:00:15.183904 1248704 node_conditions.go:123] node cpu capacity is 2
	I1123 09:00:15.183917 1248704 node_conditions.go:105] duration metric: took 27.926228ms to run NodePressure ...
	I1123 09:00:15.183930 1248704 start.go:242] waiting for startup goroutines ...
	I1123 09:00:15.183938 1248704 start.go:247] waiting for cluster config update ...
	I1123 09:00:15.183950 1248704 start.go:256] writing updated cluster config ...
	I1123 09:00:15.184257 1248704 ssh_runner.go:195] Run: rm -f paused
	I1123 09:00:15.256547 1248704 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 09:00:15.259546 1248704 out.go:179] * Done! kubectl is now configured to use "newest-cni-261704" cluster and "default" namespace by default
	I1123 09:00:12.685394 1250435 out.go:252] * Restarting existing docker container for "no-preload-591175" ...
	I1123 09:00:12.685494 1250435 cli_runner.go:164] Run: docker start no-preload-591175
	I1123 09:00:13.062975 1250435 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 09:00:13.092611 1250435 kic.go:430] container "no-preload-591175" state is running.
	I1123 09:00:13.092967 1250435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-591175
	I1123 09:00:13.119574 1250435 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/config.json ...
	I1123 09:00:13.119809 1250435 machine.go:94] provisionDockerMachine start ...
	I1123 09:00:13.119872 1250435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 09:00:13.144768 1250435 main.go:143] libmachine: Using SSH client type: native
	I1123 09:00:13.145094 1250435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34557 <nil> <nil>}
	I1123 09:00:13.145103 1250435 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:00:13.145649 1250435 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41552->127.0.0.1:34557: read: connection reset by peer
	I1123 09:00:16.346784 1250435 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-591175
	
	I1123 09:00:16.346806 1250435 ubuntu.go:182] provisioning hostname "no-preload-591175"
	I1123 09:00:16.346872 1250435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 09:00:16.379282 1250435 main.go:143] libmachine: Using SSH client type: native
	I1123 09:00:16.379592 1250435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34557 <nil> <nil>}
	I1123 09:00:16.379602 1250435 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-591175 && echo "no-preload-591175" | sudo tee /etc/hostname
	I1123 09:00:16.563295 1250435 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-591175
	
	I1123 09:00:16.563457 1250435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 09:00:16.585031 1250435 main.go:143] libmachine: Using SSH client type: native
	I1123 09:00:16.585439 1250435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34557 <nil> <nil>}
	I1123 09:00:16.585489 1250435 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-591175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-591175/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-591175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:00:16.750907 1250435 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:00:16.750998 1250435 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 09:00:16.751042 1250435 ubuntu.go:190] setting up certificates
	I1123 09:00:16.751081 1250435 provision.go:84] configureAuth start
	I1123 09:00:16.751214 1250435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-591175
	I1123 09:00:16.779021 1250435 provision.go:143] copyHostCerts
	I1123 09:00:16.779093 1250435 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem, removing ...
	I1123 09:00:16.779106 1250435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem
	I1123 09:00:16.779202 1250435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 09:00:16.779318 1250435 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem, removing ...
	I1123 09:00:16.779324 1250435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem
	I1123 09:00:16.779355 1250435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 09:00:16.779433 1250435 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem, removing ...
	I1123 09:00:16.779444 1250435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem
	I1123 09:00:16.779473 1250435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
	I1123 09:00:16.779536 1250435 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.no-preload-591175 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-591175]
	I1123 09:00:16.902358 1250435 provision.go:177] copyRemoteCerts
	I1123 09:00:16.902542 1250435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:00:16.902601 1250435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 09:00:16.919690 1250435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34557 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 09:00:17.028279 1250435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:00:17.061703 1250435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 09:00:17.095692 1250435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:00:17.123940 1250435 provision.go:87] duration metric: took 372.820379ms to configureAuth
	I1123 09:00:17.124020 1250435 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:00:17.124272 1250435 config.go:182] Loaded profile config "no-preload-591175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:17.124430 1250435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 09:00:17.143534 1250435 main.go:143] libmachine: Using SSH client type: native
	I1123 09:00:17.143858 1250435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34557 <nil> <nil>}
	I1123 09:00:17.143872 1250435 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:00:17.504586 1250435 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:00:17.504610 1250435 machine.go:97] duration metric: took 4.384791915s to provisionDockerMachine
	I1123 09:00:17.504621 1250435 start.go:293] postStartSetup for "no-preload-591175" (driver="docker")
	I1123 09:00:17.504632 1250435 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:00:17.504757 1250435 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:00:17.504836 1250435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 09:00:17.526533 1250435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34557 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 09:00:17.635709 1250435 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:00:17.640829 1250435 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:00:17.640854 1250435 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:00:17.640866 1250435 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/addons for local assets ...
	I1123 09:00:17.640916 1250435 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/files for local assets ...
	I1123 09:00:17.640995 1250435 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem -> 10431592.pem in /etc/ssl/certs
	I1123 09:00:17.641098 1250435 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:00:17.648984 1250435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 09:00:17.667558 1250435 start.go:296] duration metric: took 162.921712ms for postStartSetup
	I1123 09:00:17.667639 1250435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:00:17.667698 1250435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 09:00:17.693870 1250435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34557 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 09:00:17.813587 1250435 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:00:17.819305 1250435 fix.go:56] duration metric: took 5.173011987s for fixHost
	I1123 09:00:17.819331 1250435 start.go:83] releasing machines lock for "no-preload-591175", held for 5.173059034s
	I1123 09:00:17.819410 1250435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-591175
	I1123 09:00:17.844832 1250435 ssh_runner.go:195] Run: cat /version.json
	I1123 09:00:17.844881 1250435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 09:00:17.845130 1250435 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:00:17.845184 1250435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 09:00:17.879368 1250435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34557 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 09:00:17.892662 1250435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34557 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 09:00:17.994932 1250435 ssh_runner.go:195] Run: systemctl --version
	I1123 09:00:18.087017 1250435 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:00:18.124441 1250435 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:00:18.129353 1250435 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:00:18.129422 1250435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:00:18.137184 1250435 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:00:18.137209 1250435 start.go:496] detecting cgroup driver to use...
	I1123 09:00:18.137273 1250435 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 09:00:18.137351 1250435 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:00:18.152970 1250435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:00:18.165894 1250435 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:00:18.166010 1250435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:00:18.181596 1250435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:00:18.195141 1250435 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:00:18.346736 1250435 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:00:18.499098 1250435 docker.go:234] disabling docker service ...
	I1123 09:00:18.499204 1250435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:00:18.516742 1250435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:00:18.530978 1250435 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:00:18.692031 1250435 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:00:18.859632 1250435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:00:18.876732 1250435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:00:18.894268 1250435 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:00:18.894340 1250435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:18.905838 1250435 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 09:00:18.905904 1250435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:18.919767 1250435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:18.929579 1250435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:18.939485 1250435 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:00:18.950789 1250435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:18.970600 1250435 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:18.982065 1250435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:18.999377 1250435 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:00:19.011536 1250435 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:00:19.028315 1250435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:00:19.200768 1250435 ssh_runner.go:195] Run: sudo systemctl restart crio
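All of the sed edits above target the same CRI-O drop-in file before the restart; a quick spot-check of the result, assuming the path shown in the log:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected: pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "cgroupfs",
	# conmon_cgroup = "pod", and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls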
	I1123 09:00:19.392839 1250435 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:00:19.392907 1250435 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:00:19.397985 1250435 start.go:564] Will wait 60s for crictl version
	I1123 09:00:19.398049 1250435 ssh_runner.go:195] Run: which crictl
	I1123 09:00:19.402350 1250435 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:00:19.433620 1250435 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:00:19.433728 1250435 ssh_runner.go:195] Run: crio --version
	I1123 09:00:19.471099 1250435 ssh_runner.go:195] Run: crio --version
	I1123 09:00:19.524435 1250435 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:00:19.527358 1250435 cli_runner.go:164] Run: docker network inspect no-preload-591175 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:00:19.543380 1250435 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 09:00:19.548333 1250435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
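The grep-then-rewrite above keeps /etc/hosts idempotent: any stale host.minikube.internal line is dropped before the new one is appended. A sketch of the verification, using the gateway IP from this run:

	grep 'host.minikube.internal' /etc/hosts
	# expected single entry: 192.168.85.1	host.minikube.internal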
	I1123 09:00:19.559806 1250435 kubeadm.go:884] updating cluster {Name:no-preload-591175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-591175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:00:19.559924 1250435 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:00:19.559969 1250435 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:00:19.617720 1250435 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:00:19.617756 1250435 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:00:19.617765 1250435 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1123 09:00:19.617901 1250435 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-591175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-591175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
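The ExecStart override above disables per-QoS cgroups and node-allocatable enforcement and pins the node IP and hostname; once the drop-in is written (the 10-kubeadm.conf scp a few lines further down), the effective unit can be inspected with a sketch like:

	systemctl cat kubelet --no-pager
	# prints /lib/systemd/system/kubelet.service plus the drop-in
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf carrying the ExecStart above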
	I1123 09:00:19.618003 1250435 ssh_runner.go:195] Run: crio config
	I1123 09:00:19.700315 1250435 cni.go:84] Creating CNI manager for ""
	I1123 09:00:19.700342 1250435 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:00:19.700373 1250435 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:00:19.700411 1250435 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-591175 NodeName:no-preload-591175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:00:19.700600 1250435 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-591175"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:00:19.700697 1250435 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:00:19.712640 1250435 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:00:19.712728 1250435 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:00:19.724219 1250435 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 09:00:19.743742 1250435 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:00:19.757630 1250435 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
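The kubeadm config generated above has just been written to /var/tmp/minikube/kubeadm.yaml.new; it can also be sanity-checked offline. A hedged sketch, assuming the bundled kubeadm binary path from this log and its config validate subcommand:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new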
	I1123 09:00:19.781316 1250435 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:00:19.785897 1250435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:00:19.797839 1250435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:00:19.944031 1250435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:00:19.967772 1250435 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175 for IP: 192.168.85.2
	I1123 09:00:19.967796 1250435 certs.go:195] generating shared ca certs ...
	I1123 09:00:19.967834 1250435 certs.go:227] acquiring lock for ca certs: {Name:mk8b2dd1177c57b74f955f055073d275001ee616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:19.968017 1250435 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key
	I1123 09:00:19.968142 1250435 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key
	I1123 09:00:19.968159 1250435 certs.go:257] generating profile certs ...
	I1123 09:00:19.968279 1250435 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.key
	I1123 09:00:19.968402 1250435 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.key.0b835375
	I1123 09:00:19.968476 1250435 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/proxy-client.key
	I1123 09:00:19.968628 1250435 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem (1338 bytes)
	W1123 09:00:19.968697 1250435 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159_empty.pem, impossibly tiny 0 bytes
	I1123 09:00:19.968712 1250435 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:00:19.968772 1250435 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:00:19.968821 1250435 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:00:19.968872 1250435 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem (1675 bytes)
	I1123 09:00:19.968947 1250435 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 09:00:19.969787 1250435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:00:20.034698 1250435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:00:20.070929 1250435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:00:20.114913 1250435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 09:00:20.159920 1250435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 09:00:20.191845 1250435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:00:20.222622 1250435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:00:20.263168 1250435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:00:20.292951 1250435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:00:20.326096 1250435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem --> /usr/share/ca-certificates/1043159.pem (1338 bytes)
	I1123 09:00:20.351144 1250435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /usr/share/ca-certificates/10431592.pem (1708 bytes)
	I1123 09:00:20.383952 1250435 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:00:20.420863 1250435 ssh_runner.go:195] Run: openssl version
	I1123 09:00:20.427980 1250435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:00:20.444966 1250435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:00:20.450614 1250435 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:00:20.450675 1250435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:00:20.494587 1250435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:00:20.502788 1250435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1043159.pem && ln -fs /usr/share/ca-certificates/1043159.pem /etc/ssl/certs/1043159.pem"
	I1123 09:00:20.514389 1250435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1043159.pem
	I1123 09:00:20.519529 1250435 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:03 /usr/share/ca-certificates/1043159.pem
	I1123 09:00:20.519604 1250435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1043159.pem
	I1123 09:00:20.569178 1250435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1043159.pem /etc/ssl/certs/51391683.0"
	I1123 09:00:20.582380 1250435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10431592.pem && ln -fs /usr/share/ca-certificates/10431592.pem /etc/ssl/certs/10431592.pem"
	I1123 09:00:20.592537 1250435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10431592.pem
	I1123 09:00:20.597841 1250435 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:03 /usr/share/ca-certificates/10431592.pem
	I1123 09:00:20.597910 1250435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10431592.pem
	I1123 09:00:20.647378 1250435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10431592.pem /etc/ssl/certs/3ec20f2e.0"
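The openssl/ln pairs above build the hash-named links that OpenSSL's CA lookup expects: the link name is the certificate's subject hash plus a ".0" suffix. A minimal sketch of the same derivation for the minikubeCA certificate (b5213941.0 in this run):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"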
	I1123 09:00:20.657706 1250435 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:00:20.661735 1250435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:00:20.720795 1250435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:00:20.806925 1250435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:00:20.899177 1250435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:00:21.008019 1250435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:00:21.131448 1250435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
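Each check above uses -checkend 86400, which makes openssl exit non-zero when the certificate expires within the next 86400 seconds (24 hours); a non-zero exit here would indicate a certificate about to lapse. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"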
	I1123 09:00:21.216976 1250435 kubeadm.go:401] StartCluster: {Name:no-preload-591175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-591175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:00:21.217074 1250435 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:00:21.217144 1250435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:00:21.301856 1250435 cri.go:89] found id: "9176ef57780eecfb0be6625d611ecda108774756a7a7ba2e04cae7ba6631a68b"
	I1123 09:00:21.301874 1250435 cri.go:89] found id: "157d0e0fd3e72e28588020ec573e4dedd42cd637d9021c7aaf88f84bb1ff9ca6"
	I1123 09:00:21.301878 1250435 cri.go:89] found id: "aebf3ba174ff52b2d9016df7e7c2a73bddd769ac238a51aeefd85b75d890f557"
	I1123 09:00:21.301882 1250435 cri.go:89] found id: "84f5d17f9123dfb226e15a389bc9a5e5b2de8b259f1186f86f2f3673b2895055"
	I1123 09:00:21.301885 1250435 cri.go:89] found id: ""
	I1123 09:00:21.301938 1250435 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 09:00:21.337808 1250435 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:21Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:00:21.337888 1250435 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:00:21.354993 1250435 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:00:21.355009 1250435 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:00:21.355067 1250435 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:00:21.372582 1250435 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:00:21.373114 1250435 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-591175" does not appear in /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 09:00:21.373373 1250435 kubeconfig.go:62] /home/jenkins/minikube-integration/21966-1041293/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-591175" cluster setting kubeconfig missing "no-preload-591175" context setting]
	I1123 09:00:21.373805 1250435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:21.375159 1250435 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:00:21.392721 1250435 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 09:00:21.392753 1250435 kubeadm.go:602] duration metric: took 37.737634ms to restartPrimaryControlPlane
	I1123 09:00:21.392762 1250435 kubeadm.go:403] duration metric: took 175.798032ms to StartCluster
	I1123 09:00:21.392777 1250435 settings.go:142] acquiring lock: {Name:mk23f3092f33e47ced9558cb4bac2b30c55547fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:21.392839 1250435 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 09:00:21.393776 1250435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
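After the repair above, the kubeconfig should carry both a cluster and a context entry for the profile; a quick check, using the kubeconfig path from this log:

	kubectl config get-contexts no-preload-591175 \
	  --kubeconfig=/home/jenkins/minikube-integration/21966-1041293/kubeconfig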
	I1123 09:00:21.393981 1250435 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:00:21.394372 1250435 config.go:182] Loaded profile config "no-preload-591175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:21.394418 1250435 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:00:21.394482 1250435 addons.go:70] Setting storage-provisioner=true in profile "no-preload-591175"
	I1123 09:00:21.394496 1250435 addons.go:239] Setting addon storage-provisioner=true in "no-preload-591175"
	W1123 09:00:21.394507 1250435 addons.go:248] addon storage-provisioner should already be in state true
	I1123 09:00:21.394529 1250435 host.go:66] Checking if "no-preload-591175" exists ...
	I1123 09:00:21.395036 1250435 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 09:00:21.397316 1250435 addons.go:70] Setting dashboard=true in profile "no-preload-591175"
	I1123 09:00:21.397347 1250435 addons.go:239] Setting addon dashboard=true in "no-preload-591175"
	W1123 09:00:21.397353 1250435 addons.go:248] addon dashboard should already be in state true
	I1123 09:00:21.397379 1250435 host.go:66] Checking if "no-preload-591175" exists ...
	I1123 09:00:21.397989 1250435 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 09:00:21.400580 1250435 addons.go:70] Setting default-storageclass=true in profile "no-preload-591175"
	I1123 09:00:21.400609 1250435 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-591175"
	I1123 09:00:21.400919 1250435 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 09:00:21.402200 1250435 out.go:179] * Verifying Kubernetes components...
	I1123 09:00:21.414950 1250435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:00:21.479380 1250435 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:00:21.483054 1250435 addons.go:239] Setting addon default-storageclass=true in "no-preload-591175"
	W1123 09:00:21.487288 1250435 addons.go:248] addon default-storageclass should already be in state true
	I1123 09:00:21.487319 1250435 host.go:66] Checking if "no-preload-591175" exists ...
	I1123 09:00:21.487782 1250435 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 09:00:21.483189 1250435 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:00:21.488013 1250435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:00:21.488058 1250435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 09:00:21.487241 1250435 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 09:00:21.496618 1250435 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	
	
	==> CRI-O <==
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.276608713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.283963894Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-wp8vw/POD" id=fab092dd-dce6-4606-8bc6-7272fd6ce4a7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.28404422Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.293926879Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=fab092dd-dce6-4606-8bc6-7272fd6ce4a7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.30030146Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=eebf59e6-c218-4256-a500-2883496eaa71 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.316412562Z" level=info msg="Ran pod sandbox 3b4d9cbd9c0e9c96b56dde65e3d268d50d01ef435d9bd52a6e7f09fd4a8604a9 with infra container: kube-system/kube-proxy-wp8vw/POD" id=fab092dd-dce6-4606-8bc6-7272fd6ce4a7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.316466124Z" level=info msg="Ran pod sandbox 0d75a72dc2f75698db986d484b53f1b087968b7cabf88ca78e89907e5aebd7f1 with infra container: kube-system/kindnet-k7fsm/POD" id=eebf59e6-c218-4256-a500-2883496eaa71 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.318184533Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=29496c33-312d-4afa-8d22-97fe897c359d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.319025937Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=43ba27a9-5004-43dc-9472-31fdc43067e6 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.319122049Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=51937e58-0359-4ff4-a668-9de83b202305 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.320224475Z" level=info msg="Creating container: kube-system/kindnet-k7fsm/kindnet-cni" id=a710c393-7049-482b-a879-418a1249c761 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.320325576Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.32448417Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=68b62a4b-5129-45f1-b463-99f86c9cd8fa name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.32543273Z" level=info msg="Creating container: kube-system/kube-proxy-wp8vw/kube-proxy" id=81f6bab0-74eb-4d3f-8a3f-ea0aab6339a7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.325539378Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.337491697Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.340305541Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.342739089Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.347417127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.401334129Z" level=info msg="Created container 5bc07aa622b4f2cb6c6e329917932f22bf4e7146084f2b0def422d1c54e83a0c: kube-system/kindnet-k7fsm/kindnet-cni" id=a710c393-7049-482b-a879-418a1249c761 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.402054626Z" level=info msg="Starting container: 5bc07aa622b4f2cb6c6e329917932f22bf4e7146084f2b0def422d1c54e83a0c" id=c8027edc-d9f3-42d3-88db-d5bc389cb309 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.405530113Z" level=info msg="Started container" PID=1066 containerID=5bc07aa622b4f2cb6c6e329917932f22bf4e7146084f2b0def422d1c54e83a0c description=kube-system/kindnet-k7fsm/kindnet-cni id=c8027edc-d9f3-42d3-88db-d5bc389cb309 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0d75a72dc2f75698db986d484b53f1b087968b7cabf88ca78e89907e5aebd7f1
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.421033618Z" level=info msg="Created container 9624a44b6ce5cc2c014b290df02810ce1759d2c0a54189542ed9456c4eb24a2a: kube-system/kube-proxy-wp8vw/kube-proxy" id=81f6bab0-74eb-4d3f-8a3f-ea0aab6339a7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.421747477Z" level=info msg="Starting container: 9624a44b6ce5cc2c014b290df02810ce1759d2c0a54189542ed9456c4eb24a2a" id=b4209844-8fb1-4800-879d-21d5174cf651 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:00:14 newest-cni-261704 crio[613]: time="2025-11-23T09:00:14.424441465Z" level=info msg="Started container" PID=1067 containerID=9624a44b6ce5cc2c014b290df02810ce1759d2c0a54189542ed9456c4eb24a2a description=kube-system/kube-proxy-wp8vw/kube-proxy id=b4209844-8fb1-4800-879d-21d5174cf651 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b4d9cbd9c0e9c96b56dde65e3d268d50d01ef435d9bd52a6e7f09fd4a8604a9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	5bc07aa622b4f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   8 seconds ago       Running             kindnet-cni               1                   0d75a72dc2f75       kindnet-k7fsm                               kube-system
	9624a44b6ce5c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   8 seconds ago       Running             kube-proxy                1                   3b4d9cbd9c0e9       kube-proxy-wp8vw                            kube-system
	3af4aa3938c00       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            1                   831d5c1552685       kube-scheduler-newest-cni-261704            kube-system
	880d4ef1f66f6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago      Running             kube-apiserver            1                   a6033c94909c0       kube-apiserver-newest-cni-261704            kube-system
	6f050244cada0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 seconds ago      Running             kube-controller-manager   1                   6bb1646aa384c       kube-controller-manager-newest-cni-261704   kube-system
	f202c3fe478cd       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago      Running             etcd                      1                   5b7fc25b4be89       etcd-newest-cni-261704                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-261704
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-261704
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=newest-cni-261704
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_59_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:59:46 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-261704
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:00:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:00:13 +0000   Sun, 23 Nov 2025 08:59:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:00:13 +0000   Sun, 23 Nov 2025 08:59:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:00:13 +0000   Sun, 23 Nov 2025 08:59:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 23 Nov 2025 09:00:13 +0000   Sun, 23 Nov 2025 08:59:41 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-261704
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                56e8b4b2-8e75-46e0-8d33-48b3ccd6ced8
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-261704                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-k7fsm                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-261704             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-261704    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-wp8vw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-261704             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   Starting                 7s                 kube-proxy       
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     34s                kubelet          Node newest-cni-261704 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node newest-cni-261704 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node newest-cni-261704 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           31s                node-controller  Node newest-cni-261704 event: Registered Node newest-cni-261704 in Controller
	  Normal   Starting                 16s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15s (x8 over 15s)  kubelet          Node newest-cni-261704 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 15s)  kubelet          Node newest-cni-261704 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 15s)  kubelet          Node newest-cni-261704 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7s                 node-controller  Node newest-cni-261704 event: Registered Node newest-cni-261704 in Controller
	
	
	==> dmesg <==
	[  +8.276067] overlayfs: idmapped layers are currently not supported
	[Nov23 08:39] overlayfs: idmapped layers are currently not supported
	[ +25.090966] overlayfs: idmapped layers are currently not supported
	[Nov23 08:40] overlayfs: idmapped layers are currently not supported
	[ +26.896711] overlayfs: idmapped layers are currently not supported
	[Nov23 08:41] overlayfs: idmapped layers are currently not supported
	[Nov23 08:43] overlayfs: idmapped layers are currently not supported
	[Nov23 08:45] overlayfs: idmapped layers are currently not supported
	[Nov23 08:46] overlayfs: idmapped layers are currently not supported
	[Nov23 08:47] overlayfs: idmapped layers are currently not supported
	[Nov23 08:49] overlayfs: idmapped layers are currently not supported
	[Nov23 08:51] overlayfs: idmapped layers are currently not supported
	[ +55.116920] overlayfs: idmapped layers are currently not supported
	[Nov23 08:52] overlayfs: idmapped layers are currently not supported
	[  +5.731396] overlayfs: idmapped layers are currently not supported
	[Nov23 08:53] overlayfs: idmapped layers are currently not supported
	[Nov23 08:54] overlayfs: idmapped layers are currently not supported
	[Nov23 08:55] overlayfs: idmapped layers are currently not supported
	[Nov23 08:56] overlayfs: idmapped layers are currently not supported
	[Nov23 08:57] overlayfs: idmapped layers are currently not supported
	[Nov23 08:58] overlayfs: idmapped layers are currently not supported
	[ +37.440319] overlayfs: idmapped layers are currently not supported
	[Nov23 08:59] overlayfs: idmapped layers are currently not supported
	[Nov23 09:00] overlayfs: idmapped layers are currently not supported
	[ +12.221002] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f202c3fe478cd3e553a0db890621640c851e159694e8421e346372dfd05c53b6] <==
	{"level":"warn","ts":"2025-11-23T09:00:11.116863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.137453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.156505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.181984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.216165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.224168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.256245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.267932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.286096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.309288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.324095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.335861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.352478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.368946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.386320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.403370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.419692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.437753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.464105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.486506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.505237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.526170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.569257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.592633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:11.742698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55206","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:00:23 up  9:42,  0 user,  load average: 4.60, 3.58, 2.89
	Linux newest-cni-261704 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5bc07aa622b4f2cb6c6e329917932f22bf4e7146084f2b0def422d1c54e83a0c] <==
	I1123 09:00:14.474943       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:00:14.534649       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 09:00:14.534789       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:00:14.534811       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:00:14.534823       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:00:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:00:14.768597       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:00:14.769139       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:00:14.769190       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:00:14.776237       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [880d4ef1f66f609d81a2b6e4dbdc4df6a03f35b8f9d778a7ed11c71849e44600] <==
	I1123 09:00:13.502512       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 09:00:13.503044       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 09:00:13.507488       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 09:00:13.507589       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 09:00:13.507634       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:00:13.516635       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 09:00:13.516790       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 09:00:13.532121       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 09:00:13.544810       1 aggregator.go:171] initial CRD sync complete...
	I1123 09:00:13.544829       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 09:00:13.544837       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:00:13.544845       1 cache.go:39] Caches are synced for autoregister controller
	E1123 09:00:13.564831       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 09:00:13.731797       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:00:14.063875       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:00:14.270199       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:00:14.442212       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:00:14.547981       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:00:14.663043       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:00:15.083512       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.137.195"}
	I1123 09:00:15.113803       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.144.22"}
	I1123 09:00:16.093941       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:00:16.498719       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:00:16.544698       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:00:16.833027       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [6f050244cada0b5b8f4416fbf67c393c75b4f649d2e93d425648834a3cac6d0f] <==
	I1123 09:00:16.080258       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:00:16.082782       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:00:16.082816       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 09:00:16.087274       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:00:16.090476       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 09:00:16.090563       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 09:00:16.090601       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 09:00:16.090628       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 09:00:16.090688       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:00:16.090758       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:00:16.092914       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:00:16.098235       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:00:16.101147       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 09:00:16.101259       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 09:00:16.101352       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-261704"
	I1123 09:00:16.101406       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 09:00:16.101451       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:00:16.117057       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:00:16.123860       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:00:16.123884       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:00:16.123890       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:00:16.125063       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:00:16.138483       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 09:00:16.143715       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 09:00:16.161111       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [9624a44b6ce5cc2c014b290df02810ce1759d2c0a54189542ed9456c4eb24a2a] <==
	I1123 09:00:15.059527       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:00:15.168963       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:00:15.271711       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:00:15.272792       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 09:00:15.272931       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:00:15.364520       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:00:15.364637       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:00:15.368645       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:00:15.368991       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:00:15.369154       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:00:15.374031       1 config.go:200] "Starting service config controller"
	I1123 09:00:15.374090       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:00:15.374226       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:00:15.374261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:00:15.374460       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:00:15.374499       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:00:15.375161       1 config.go:309] "Starting node config controller"
	I1123 09:00:15.375265       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:00:15.375297       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:00:15.483310       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:00:15.483346       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:00:15.483391       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3af4aa3938c00ead58f3f54203d3fd9e33f73b03851da51e786f34d10ff67ee9] <==
	I1123 09:00:11.986685       1 serving.go:386] Generated self-signed cert in-memory
	I1123 09:00:14.872179       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:00:14.872212       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:00:14.903958       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 09:00:14.904080       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 09:00:14.904194       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 09:00:14.904224       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:00:14.904240       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:00:14.915665       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:00:14.904251       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:00:14.915801       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:00:15.014899       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 09:00:15.024077       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:00:15.025008       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.575590     732 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.601135     732 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.601263     732 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.601296     732 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.604197     732 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: E1123 09:00:13.642252     732 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-261704\" already exists" pod="kube-system/etcd-newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.642449     732 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: E1123 09:00:13.689459     732 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-261704\" already exists" pod="kube-system/kube-apiserver-newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.689513     732 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: E1123 09:00:13.732790     732 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-261704\" already exists" pod="kube-system/kube-controller-manager-newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.732849     732 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: E1123 09:00:13.767650     732 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-261704\" already exists" pod="kube-system/kube-scheduler-newest-cni-261704"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.962698     732 apiserver.go:52] "Watching apiserver"
	Nov 23 09:00:13 newest-cni-261704 kubelet[732]: I1123 09:00:13.983648     732 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 23 09:00:14 newest-cni-261704 kubelet[732]: I1123 09:00:14.050052     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c5f3452-ed50-4a8d-82e3-51abceb3b21b-lib-modules\") pod \"kindnet-k7fsm\" (UID: \"7c5f3452-ed50-4a8d-82e3-51abceb3b21b\") " pod="kube-system/kindnet-k7fsm"
	Nov 23 09:00:14 newest-cni-261704 kubelet[732]: I1123 09:00:14.050182     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c5f3452-ed50-4a8d-82e3-51abceb3b21b-xtables-lock\") pod \"kindnet-k7fsm\" (UID: \"7c5f3452-ed50-4a8d-82e3-51abceb3b21b\") " pod="kube-system/kindnet-k7fsm"
	Nov 23 09:00:14 newest-cni-261704 kubelet[732]: I1123 09:00:14.050220     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7c5f3452-ed50-4a8d-82e3-51abceb3b21b-cni-cfg\") pod \"kindnet-k7fsm\" (UID: \"7c5f3452-ed50-4a8d-82e3-51abceb3b21b\") " pod="kube-system/kindnet-k7fsm"
	Nov 23 09:00:14 newest-cni-261704 kubelet[732]: I1123 09:00:14.050283     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36630050-6d8d-433a-a3bc-77fc44b8484e-xtables-lock\") pod \"kube-proxy-wp8vw\" (UID: \"36630050-6d8d-433a-a3bc-77fc44b8484e\") " pod="kube-system/kube-proxy-wp8vw"
	Nov 23 09:00:14 newest-cni-261704 kubelet[732]: I1123 09:00:14.050304     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36630050-6d8d-433a-a3bc-77fc44b8484e-lib-modules\") pod \"kube-proxy-wp8vw\" (UID: \"36630050-6d8d-433a-a3bc-77fc44b8484e\") " pod="kube-system/kube-proxy-wp8vw"
	Nov 23 09:00:14 newest-cni-261704 kubelet[732]: I1123 09:00:14.095794     732 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 09:00:14 newest-cni-261704 kubelet[732]: W1123 09:00:14.312582     732 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772/crio-3b4d9cbd9c0e9c96b56dde65e3d268d50d01ef435d9bd52a6e7f09fd4a8604a9 WatchSource:0}: Error finding container 3b4d9cbd9c0e9c96b56dde65e3d268d50d01ef435d9bd52a6e7f09fd4a8604a9: Status 404 returned error can't find the container with id 3b4d9cbd9c0e9c96b56dde65e3d268d50d01ef435d9bd52a6e7f09fd4a8604a9
	Nov 23 09:00:14 newest-cni-261704 kubelet[732]: W1123 09:00:14.313406     732 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b3bc5f52919994c9d07b6e6df11224fdd1b7497b45a11eb97626b4a617e58772/crio-0d75a72dc2f75698db986d484b53f1b087968b7cabf88ca78e89907e5aebd7f1 WatchSource:0}: Error finding container 0d75a72dc2f75698db986d484b53f1b087968b7cabf88ca78e89907e5aebd7f1: Status 404 returned error can't find the container with id 0d75a72dc2f75698db986d484b53f1b087968b7cabf88ca78e89907e5aebd7f1
	Nov 23 09:00:16 newest-cni-261704 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 09:00:16 newest-cni-261704 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 09:00:16 newest-cni-261704 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-261704 -n newest-cni-261704
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-261704 -n newest-cni-261704: exit status 2 (554.726016ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-261704 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-mdvx8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jzcrk kubernetes-dashboard-855c9754f9-6xcjk
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-261704 describe pod coredns-66bc5c9577-mdvx8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jzcrk kubernetes-dashboard-855c9754f9-6xcjk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-261704 describe pod coredns-66bc5c9577-mdvx8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jzcrk kubernetes-dashboard-855c9754f9-6xcjk: exit status 1 (123.435694ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-mdvx8" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-jzcrk" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-6xcjk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-261704 describe pod coredns-66bc5c9577-mdvx8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jzcrk kubernetes-dashboard-855c9754f9-6xcjk: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (8.47s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-591175 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-591175 --alsologtostderr -v=1: exit status 80 (1.897396142s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-591175 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:01:20.818675 1256685 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:01:20.818806 1256685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:01:20.818816 1256685 out.go:374] Setting ErrFile to fd 2...
	I1123 09:01:20.818822 1256685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:01:20.819071 1256685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 09:01:20.819339 1256685 out.go:368] Setting JSON to false
	I1123 09:01:20.819366 1256685 mustload.go:66] Loading cluster: no-preload-591175
	I1123 09:01:20.819868 1256685 config.go:182] Loaded profile config "no-preload-591175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:01:20.820357 1256685 cli_runner.go:164] Run: docker container inspect no-preload-591175 --format={{.State.Status}}
	I1123 09:01:20.836560 1256685 host.go:66] Checking if "no-preload-591175" exists ...
	I1123 09:01:20.836885 1256685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:01:20.909407 1256685 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 09:01:20.899841209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:01:20.910027 1256685 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-591175 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 09:01:20.913696 1256685 out.go:179] * Pausing node no-preload-591175 ... 
	I1123 09:01:20.916653 1256685 host.go:66] Checking if "no-preload-591175" exists ...
	I1123 09:01:20.916986 1256685 ssh_runner.go:195] Run: systemctl --version
	I1123 09:01:20.917036 1256685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-591175
	I1123 09:01:20.934094 1256685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34557 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/no-preload-591175/id_rsa Username:docker}
	I1123 09:01:21.037798 1256685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:01:21.057608 1256685 pause.go:52] kubelet running: true
	I1123 09:01:21.057688 1256685 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:01:21.298610 1256685 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:01:21.298697 1256685 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:01:21.373757 1256685 cri.go:89] found id: "fbdf54514881b66bf257ad9c157d36d9d46f3a29c186b01a6f64ee63c4de43fb"
	I1123 09:01:21.373776 1256685 cri.go:89] found id: "2fcc0e19109bc84ca2b9d741452c957bdab6dd11b089a54216971f59ea750720"
	I1123 09:01:21.373781 1256685 cri.go:89] found id: "95e135f4cdc6c76c534ff22368e28377e25b471ed736c58f590eae564658328b"
	I1123 09:01:21.373785 1256685 cri.go:89] found id: "e796016163ed388aa9e995b7fbf568cb73db2071a880fa33c913e398ff464229"
	I1123 09:01:21.373788 1256685 cri.go:89] found id: "98061e6f3b0355afb9092940374e4137f051f66db6053d855478a46ce03c472c"
	I1123 09:01:21.373791 1256685 cri.go:89] found id: "9176ef57780eecfb0be6625d611ecda108774756a7a7ba2e04cae7ba6631a68b"
	I1123 09:01:21.373794 1256685 cri.go:89] found id: "157d0e0fd3e72e28588020ec573e4dedd42cd637d9021c7aaf88f84bb1ff9ca6"
	I1123 09:01:21.373797 1256685 cri.go:89] found id: "aebf3ba174ff52b2d9016df7e7c2a73bddd769ac238a51aeefd85b75d890f557"
	I1123 09:01:21.373800 1256685 cri.go:89] found id: "84f5d17f9123dfb226e15a389bc9a5e5b2de8b259f1186f86f2f3673b2895055"
	I1123 09:01:21.373806 1256685 cri.go:89] found id: "0f7817fdeccec014e739b087663b5baa22386f19c93acc1e9b1b90b8b9eea98b"
	I1123 09:01:21.373809 1256685 cri.go:89] found id: "74106f0c2a309342ef590081bd9557bf94fa83268eb9ee5ec4d761dd9cb1c240"
	I1123 09:01:21.373812 1256685 cri.go:89] found id: ""
	I1123 09:01:21.373860 1256685 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:01:21.393393 1256685 retry.go:31] will retry after 322.706005ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:01:21Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:01:21.716956 1256685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:01:21.729555 1256685 pause.go:52] kubelet running: false
	I1123 09:01:21.729661 1256685 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:01:21.900363 1256685 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:01:21.900454 1256685 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:01:21.970040 1256685 cri.go:89] found id: "fbdf54514881b66bf257ad9c157d36d9d46f3a29c186b01a6f64ee63c4de43fb"
	I1123 09:01:21.970100 1256685 cri.go:89] found id: "2fcc0e19109bc84ca2b9d741452c957bdab6dd11b089a54216971f59ea750720"
	I1123 09:01:21.970119 1256685 cri.go:89] found id: "95e135f4cdc6c76c534ff22368e28377e25b471ed736c58f590eae564658328b"
	I1123 09:01:21.970136 1256685 cri.go:89] found id: "e796016163ed388aa9e995b7fbf568cb73db2071a880fa33c913e398ff464229"
	I1123 09:01:21.970155 1256685 cri.go:89] found id: "98061e6f3b0355afb9092940374e4137f051f66db6053d855478a46ce03c472c"
	I1123 09:01:21.970173 1256685 cri.go:89] found id: "9176ef57780eecfb0be6625d611ecda108774756a7a7ba2e04cae7ba6631a68b"
	I1123 09:01:21.970198 1256685 cri.go:89] found id: "157d0e0fd3e72e28588020ec573e4dedd42cd637d9021c7aaf88f84bb1ff9ca6"
	I1123 09:01:21.970293 1256685 cri.go:89] found id: "aebf3ba174ff52b2d9016df7e7c2a73bddd769ac238a51aeefd85b75d890f557"
	I1123 09:01:21.970333 1256685 cri.go:89] found id: "84f5d17f9123dfb226e15a389bc9a5e5b2de8b259f1186f86f2f3673b2895055"
	I1123 09:01:21.970365 1256685 cri.go:89] found id: "0f7817fdeccec014e739b087663b5baa22386f19c93acc1e9b1b90b8b9eea98b"
	I1123 09:01:21.970383 1256685 cri.go:89] found id: "74106f0c2a309342ef590081bd9557bf94fa83268eb9ee5ec4d761dd9cb1c240"
	I1123 09:01:21.970400 1256685 cri.go:89] found id: ""
	I1123 09:01:21.970466 1256685 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:01:21.983712 1256685 retry.go:31] will retry after 379.939217ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:01:21Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:01:22.364366 1256685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:01:22.378255 1256685 pause.go:52] kubelet running: false
	I1123 09:01:22.378360 1256685 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:01:22.561051 1256685 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:01:22.561126 1256685 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:01:22.630392 1256685 cri.go:89] found id: "fbdf54514881b66bf257ad9c157d36d9d46f3a29c186b01a6f64ee63c4de43fb"
	I1123 09:01:22.630415 1256685 cri.go:89] found id: "2fcc0e19109bc84ca2b9d741452c957bdab6dd11b089a54216971f59ea750720"
	I1123 09:01:22.630420 1256685 cri.go:89] found id: "95e135f4cdc6c76c534ff22368e28377e25b471ed736c58f590eae564658328b"
	I1123 09:01:22.630424 1256685 cri.go:89] found id: "e796016163ed388aa9e995b7fbf568cb73db2071a880fa33c913e398ff464229"
	I1123 09:01:22.630427 1256685 cri.go:89] found id: "98061e6f3b0355afb9092940374e4137f051f66db6053d855478a46ce03c472c"
	I1123 09:01:22.630431 1256685 cri.go:89] found id: "9176ef57780eecfb0be6625d611ecda108774756a7a7ba2e04cae7ba6631a68b"
	I1123 09:01:22.630434 1256685 cri.go:89] found id: "157d0e0fd3e72e28588020ec573e4dedd42cd637d9021c7aaf88f84bb1ff9ca6"
	I1123 09:01:22.630438 1256685 cri.go:89] found id: "aebf3ba174ff52b2d9016df7e7c2a73bddd769ac238a51aeefd85b75d890f557"
	I1123 09:01:22.630441 1256685 cri.go:89] found id: "84f5d17f9123dfb226e15a389bc9a5e5b2de8b259f1186f86f2f3673b2895055"
	I1123 09:01:22.630468 1256685 cri.go:89] found id: "0f7817fdeccec014e739b087663b5baa22386f19c93acc1e9b1b90b8b9eea98b"
	I1123 09:01:22.630479 1256685 cri.go:89] found id: "74106f0c2a309342ef590081bd9557bf94fa83268eb9ee5ec4d761dd9cb1c240"
	I1123 09:01:22.630484 1256685 cri.go:89] found id: ""
	I1123 09:01:22.630545 1256685 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:01:22.645404 1256685 out.go:203] 
	W1123 09:01:22.648344 1256685 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:01:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:01:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:01:22.648365 1256685 out.go:285] * 
	* 
	W1123 09:01:22.657402 1256685 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:01:22.660401 1256685 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-591175 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-591175
helpers_test.go:243: (dbg) docker inspect no-preload-591175:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369",
	        "Created": "2025-11-23T08:58:38.098322261Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1250560,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:00:12.722795342Z",
	            "FinishedAt": "2025-11-23T09:00:11.529118259Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369/hostname",
	        "HostsPath": "/var/lib/docker/containers/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369/hosts",
	        "LogPath": "/var/lib/docker/containers/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369-json.log",
	        "Name": "/no-preload-591175",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-591175:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-591175",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369",
	                "LowerDir": "/var/lib/docker/overlay2/771f258756c2bbb7a52acc018af18f3945b3a6a6c890b53f5dd366fd3977c014-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/771f258756c2bbb7a52acc018af18f3945b3a6a6c890b53f5dd366fd3977c014/merged",
	                "UpperDir": "/var/lib/docker/overlay2/771f258756c2bbb7a52acc018af18f3945b3a6a6c890b53f5dd366fd3977c014/diff",
	                "WorkDir": "/var/lib/docker/overlay2/771f258756c2bbb7a52acc018af18f3945b3a6a6c890b53f5dd366fd3977c014/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-591175",
	                "Source": "/var/lib/docker/volumes/no-preload-591175/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-591175",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-591175",
	                "name.minikube.sigs.k8s.io": "no-preload-591175",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6718a243beeb99aacf9742136d0eb632fded191cda1d18b423049d24f16ab944",
	            "SandboxKey": "/var/run/docker/netns/6718a243beeb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34557"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34558"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34561"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34559"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34560"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-591175": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:34:c8:d3:57:a3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5cb890fde481b5761669b16b762b3e0bbd64d2ef935451546915fdbb684d58af",
	                    "EndpointID": "c001ae934474dd29d45b1a976a7659fa2804413ae2a5e62bbce07609d8435232",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-591175",
	                        "14f3744363b8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-591175 -n no-preload-591175
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-591175 -n no-preload-591175: exit status 2 (345.326606ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-591175 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-591175 logs -n 25: (1.364568701s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p default-k8s-diff-port-262764 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-262764                                                                                                                                                                                                               │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ delete  │ -p default-k8s-diff-port-262764                                                                                                                                                                                                               │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ delete  │ -p disable-driver-mounts-880590                                                                                                                                                                                                               │ disable-driver-mounts-880590 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p no-preload-591175 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:59 UTC │
	│ image   │ embed-certs-879861 image list --format=json                                                                                                                                                                                                   │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ pause   │ -p embed-certs-879861 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	│ delete  │ -p embed-certs-879861                                                                                                                                                                                                                         │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ delete  │ -p embed-certs-879861                                                                                                                                                                                                                         │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p newest-cni-261704 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ addons  │ enable metrics-server -p newest-cni-261704 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-591175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	│ stop    │ -p newest-cni-261704 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ stop    │ -p no-preload-591175 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 09:00 UTC │
	│ addons  │ enable dashboard -p newest-cni-261704 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p newest-cni-261704 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 09:00 UTC │
	│ addons  │ enable dashboard -p no-preload-591175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ start   │ -p no-preload-591175 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:01 UTC │
	│ image   │ newest-cni-261704 image list --format=json                                                                                                                                                                                                    │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ pause   │ -p newest-cni-261704 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ delete  │ -p newest-cni-261704                                                                                                                                                                                                                          │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ delete  │ -p newest-cni-261704                                                                                                                                                                                                                          │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ start   │ -p auto-082524 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-082524                  │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ image   │ no-preload-591175 image list --format=json                                                                                                                                                                                                    │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ pause   │ -p no-preload-591175 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:00:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:00:27.174663 1253581 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:00:27.175297 1253581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:27.175333 1253581 out.go:374] Setting ErrFile to fd 2...
	I1123 09:00:27.175356 1253581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:27.175650 1253581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 09:00:27.176135 1253581 out.go:368] Setting JSON to false
	I1123 09:00:27.177148 1253581 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34973,"bootTime":1763853455,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 09:00:27.177250 1253581 start.go:143] virtualization:  
	I1123 09:00:27.181465 1253581 out.go:179] * [auto-082524] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 09:00:27.185166 1253581 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 09:00:27.185245 1253581 notify.go:221] Checking for updates...
	I1123 09:00:27.189150 1253581 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:00:27.192453 1253581 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 09:00:27.195534 1253581 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 09:00:27.198742 1253581 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 09:00:27.201963 1253581 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:00:22.321008 1250435 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 09:00:22.321030 1250435 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 09:00:22.373481 1250435 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 09:00:22.373503 1250435 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 09:00:22.424313 1250435 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 09:00:22.424335 1250435 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 09:00:22.455150 1250435 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 09:00:22.455247 1250435 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 09:00:22.486676 1250435 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:00:22.486698 1250435 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 09:00:22.513499 1250435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:00:27.205619 1253581 config.go:182] Loaded profile config "no-preload-591175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:27.205770 1253581 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:00:27.252466 1253581 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 09:00:27.252647 1253581 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:00:27.368318 1253581 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 09:00:27.353022416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:00:27.368430 1253581 docker.go:319] overlay module found
	I1123 09:00:27.371556 1253581 out.go:179] * Using the docker driver based on user configuration
	I1123 09:00:27.374481 1253581 start.go:309] selected driver: docker
	I1123 09:00:27.374498 1253581 start.go:927] validating driver "docker" against <nil>
	I1123 09:00:27.374511 1253581 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:00:27.375248 1253581 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:00:27.477240 1253581 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 09:00:27.467330761 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:00:27.477403 1253581 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:00:27.477630 1253581 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:00:27.480701 1253581 out.go:179] * Using Docker driver with root privileges
	I1123 09:00:27.483471 1253581 cni.go:84] Creating CNI manager for ""
	I1123 09:00:27.483543 1253581 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:00:27.483556 1253581 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:00:27.483633 1253581 start.go:353] cluster config:
	{Name:auto-082524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-082524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:00:27.486814 1253581 out.go:179] * Starting "auto-082524" primary control-plane node in "auto-082524" cluster
	I1123 09:00:27.489763 1253581 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:00:27.492583 1253581 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:00:27.495489 1253581 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:00:27.495556 1253581 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 09:00:27.495571 1253581 cache.go:65] Caching tarball of preloaded images
	I1123 09:00:27.495652 1253581 preload.go:238] Found /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 09:00:27.495667 1253581 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:00:27.495772 1253581 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/config.json ...
	I1123 09:00:27.495795 1253581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/config.json: {Name:mk8307308d35f5a8dd72a039406096dc09879244 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:27.495948 1253581 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:00:27.514400 1253581 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:00:27.514424 1253581 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:00:27.514442 1253581 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:00:27.514478 1253581 start.go:360] acquireMachinesLock for auto-082524: {Name:mkbda3902800cc164468f93c9a878ecedc5d1cbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:27.514580 1253581 start.go:364] duration metric: took 81.802µs to acquireMachinesLock for "auto-082524"
	I1123 09:00:27.514616 1253581 start.go:93] Provisioning new machine with config: &{Name:auto-082524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-082524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:00:27.514685 1253581 start.go:125] createHost starting for "" (driver="docker")
	I1123 09:00:28.463020 1250435 node_ready.go:49] node "no-preload-591175" is "Ready"
	I1123 09:00:28.463046 1250435 node_ready.go:38] duration metric: took 6.501185019s for node "no-preload-591175" to be "Ready" ...
	I1123 09:00:28.463060 1250435 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:00:28.463116 1250435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:00:28.698143 1250435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.785689937s)
	I1123 09:00:31.786472 1250435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.862247196s)
	I1123 09:00:31.786589 1250435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.273056322s)
	I1123 09:00:31.786691 1250435 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.323564019s)
	I1123 09:00:31.786710 1250435 api_server.go:72] duration metric: took 10.392698363s to wait for apiserver process to appear ...
	I1123 09:00:31.786716 1250435 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:00:31.786737 1250435 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 09:00:31.794310 1250435 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-591175 addons enable metrics-server
	
	I1123 09:00:31.814332 1250435 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1123 09:00:27.518066 1253581 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 09:00:27.518317 1253581 start.go:159] libmachine.API.Create for "auto-082524" (driver="docker")
	I1123 09:00:27.518352 1253581 client.go:173] LocalClient.Create starting
	I1123 09:00:27.518435 1253581 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem
	I1123 09:00:27.518473 1253581 main.go:143] libmachine: Decoding PEM data...
	I1123 09:00:27.518495 1253581 main.go:143] libmachine: Parsing certificate...
	I1123 09:00:27.518559 1253581 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem
	I1123 09:00:27.518584 1253581 main.go:143] libmachine: Decoding PEM data...
	I1123 09:00:27.518600 1253581 main.go:143] libmachine: Parsing certificate...
	I1123 09:00:27.519092 1253581 cli_runner.go:164] Run: docker network inspect auto-082524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 09:00:27.540842 1253581 cli_runner.go:211] docker network inspect auto-082524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 09:00:27.540933 1253581 network_create.go:284] running [docker network inspect auto-082524] to gather additional debugging logs...
	I1123 09:00:27.540966 1253581 cli_runner.go:164] Run: docker network inspect auto-082524
	W1123 09:00:27.560831 1253581 cli_runner.go:211] docker network inspect auto-082524 returned with exit code 1
	I1123 09:00:27.560863 1253581 network_create.go:287] error running [docker network inspect auto-082524]: docker network inspect auto-082524: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-082524 not found
	I1123 09:00:27.560878 1253581 network_create.go:289] output of [docker network inspect auto-082524]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-082524 not found
	
	** /stderr **
	I1123 09:00:27.560969 1253581 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:00:27.581238 1253581 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-32d396d9f7df IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:9b:29:4a:5c:ab} reservation:<nil>}
	I1123 09:00:27.581572 1253581 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-859c97accd92 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:ea:cf:62:f4:f8} reservation:<nil>}
	I1123 09:00:27.581889 1253581 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-50e966d7b39a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:1d:b6:b9:b9:ef} reservation:<nil>}
	I1123 09:00:27.582313 1253581 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a19150}
	I1123 09:00:27.582336 1253581 network_create.go:124] attempt to create docker network auto-082524 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 09:00:27.582395 1253581 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-082524 auto-082524
	I1123 09:00:27.678067 1253581 network_create.go:108] docker network auto-082524 192.168.76.0/24 created
	I1123 09:00:27.678102 1253581 kic.go:121] calculated static IP "192.168.76.2" for the "auto-082524" container
	I1123 09:00:27.678172 1253581 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 09:00:27.694493 1253581 cli_runner.go:164] Run: docker volume create auto-082524 --label name.minikube.sigs.k8s.io=auto-082524 --label created_by.minikube.sigs.k8s.io=true
	I1123 09:00:27.711347 1253581 oci.go:103] Successfully created a docker volume auto-082524
	I1123 09:00:27.711430 1253581 cli_runner.go:164] Run: docker run --rm --name auto-082524-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-082524 --entrypoint /usr/bin/test -v auto-082524:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 09:00:28.408741 1253581 oci.go:107] Successfully prepared a docker volume auto-082524
	I1123 09:00:28.408798 1253581 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:00:28.408807 1253581 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 09:00:28.408870 1253581 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-082524:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 09:00:31.828410 1250435 addons.go:530] duration metric: took 10.433981611s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1123 09:00:31.863062 1250435 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 09:00:31.871386 1250435 api_server.go:141] control plane version: v1.34.1
	I1123 09:00:31.871422 1250435 api_server.go:131] duration metric: took 84.695167ms to wait for apiserver health ...
	I1123 09:00:31.871433 1250435 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:00:31.886982 1250435 system_pods.go:59] 8 kube-system pods found
	I1123 09:00:31.887028 1250435 system_pods.go:61] "coredns-66bc5c9577-zwlsw" [4493cf17-56c7-4aec-aff9-f1b7a47398ea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:00:31.887037 1250435 system_pods.go:61] "etcd-no-preload-591175" [d2307eaa-f09d-4d85-8172-b403550f572f] Running
	I1123 09:00:31.887043 1250435 system_pods.go:61] "kindnet-v65j2" [c422d680-2063-435a-8b26-e265e3554728] Running
	I1123 09:00:31.887051 1250435 system_pods.go:61] "kube-apiserver-no-preload-591175" [07643f8f-afbf-48fd-9a2c-b68e6f2a69f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:00:31.887059 1250435 system_pods.go:61] "kube-controller-manager-no-preload-591175" [153ceee0-38e4-41e6-98bc-915c5d18b057] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:00:31.887063 1250435 system_pods.go:61] "kube-proxy-rblwh" [8c4a2941-2f19-43ba-8f9a-7a48072b1223] Running
	I1123 09:00:31.887072 1250435 system_pods.go:61] "kube-scheduler-no-preload-591175" [ce19b8a6-00bd-4cdc-a245-0a8f9551e38d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:00:31.887076 1250435 system_pods.go:61] "storage-provisioner" [923af3fc-5d78-45d7-ad14-fd020a72b76d] Running
	I1123 09:00:31.887081 1250435 system_pods.go:74] duration metric: took 15.644046ms to wait for pod list to return data ...
	I1123 09:00:31.887094 1250435 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:00:31.919638 1250435 default_sa.go:45] found service account: "default"
	I1123 09:00:31.919674 1250435 default_sa.go:55] duration metric: took 32.572446ms for default service account to be created ...
	I1123 09:00:31.919686 1250435 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:00:31.946680 1250435 system_pods.go:86] 8 kube-system pods found
	I1123 09:00:31.946720 1250435 system_pods.go:89] "coredns-66bc5c9577-zwlsw" [4493cf17-56c7-4aec-aff9-f1b7a47398ea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:00:31.946729 1250435 system_pods.go:89] "etcd-no-preload-591175" [d2307eaa-f09d-4d85-8172-b403550f572f] Running
	I1123 09:00:31.946735 1250435 system_pods.go:89] "kindnet-v65j2" [c422d680-2063-435a-8b26-e265e3554728] Running
	I1123 09:00:31.946742 1250435 system_pods.go:89] "kube-apiserver-no-preload-591175" [07643f8f-afbf-48fd-9a2c-b68e6f2a69f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:00:31.946748 1250435 system_pods.go:89] "kube-controller-manager-no-preload-591175" [153ceee0-38e4-41e6-98bc-915c5d18b057] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:00:31.946753 1250435 system_pods.go:89] "kube-proxy-rblwh" [8c4a2941-2f19-43ba-8f9a-7a48072b1223] Running
	I1123 09:00:31.946765 1250435 system_pods.go:89] "kube-scheduler-no-preload-591175" [ce19b8a6-00bd-4cdc-a245-0a8f9551e38d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:00:31.946771 1250435 system_pods.go:89] "storage-provisioner" [923af3fc-5d78-45d7-ad14-fd020a72b76d] Running
	I1123 09:00:31.946780 1250435 system_pods.go:126] duration metric: took 27.087112ms to wait for k8s-apps to be running ...
	I1123 09:00:31.946791 1250435 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:00:31.946849 1250435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:00:31.963043 1250435 system_svc.go:56] duration metric: took 16.240583ms WaitForService to wait for kubelet
	I1123 09:00:31.963125 1250435 kubeadm.go:587] duration metric: took 10.569111374s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:00:31.963160 1250435 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:00:32.003125 1250435 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:00:32.003267 1250435 node_conditions.go:123] node cpu capacity is 2
	I1123 09:00:32.003301 1250435 node_conditions.go:105] duration metric: took 40.040256ms to run NodePressure ...
	I1123 09:00:32.003340 1250435 start.go:242] waiting for startup goroutines ...
	I1123 09:00:32.003365 1250435 start.go:247] waiting for cluster config update ...
	I1123 09:00:32.003391 1250435 start.go:256] writing updated cluster config ...
	I1123 09:00:32.004423 1250435 ssh_runner.go:195] Run: rm -f paused
	I1123 09:00:32.010008 1250435 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:00:32.034627 1250435 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zwlsw" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:33.271733 1253581 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-082524:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.862830407s)
	I1123 09:00:33.271770 1253581 kic.go:203] duration metric: took 4.862958435s to extract preloaded images to volume ...
	W1123 09:00:33.271909 1253581 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 09:00:33.272024 1253581 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 09:00:33.329240 1253581 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-082524 --name auto-082524 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-082524 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-082524 --network auto-082524 --ip 192.168.76.2 --volume auto-082524:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 09:00:33.728027 1253581 cli_runner.go:164] Run: docker container inspect auto-082524 --format={{.State.Running}}
	I1123 09:00:33.751691 1253581 cli_runner.go:164] Run: docker container inspect auto-082524 --format={{.State.Status}}
	I1123 09:00:33.780211 1253581 cli_runner.go:164] Run: docker exec auto-082524 stat /var/lib/dpkg/alternatives/iptables
	I1123 09:00:33.842067 1253581 oci.go:144] the created container "auto-082524" has a running status.
	I1123 09:00:33.842094 1253581 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/auto-082524/id_rsa...
	I1123 09:00:34.410946 1253581 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/auto-082524/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 09:00:34.453980 1253581 cli_runner.go:164] Run: docker container inspect auto-082524 --format={{.State.Status}}
	I1123 09:00:34.489690 1253581 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 09:00:34.489721 1253581 kic_runner.go:114] Args: [docker exec --privileged auto-082524 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 09:00:34.579040 1253581 cli_runner.go:164] Run: docker container inspect auto-082524 --format={{.State.Status}}
	I1123 09:00:34.605880 1253581 machine.go:94] provisionDockerMachine start ...
	I1123 09:00:34.605968 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:00:34.637108 1253581 main.go:143] libmachine: Using SSH client type: native
	I1123 09:00:34.637470 1253581 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34562 <nil> <nil>}
	I1123 09:00:34.637486 1253581 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:00:34.638043 1253581 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60518->127.0.0.1:34562: read: connection reset by peer
	W1123 09:00:34.053493 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	W1123 09:00:36.540278 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	I1123 09:00:37.799235 1253581 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-082524
	
	I1123 09:00:37.799261 1253581 ubuntu.go:182] provisioning hostname "auto-082524"
	I1123 09:00:37.799323 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:00:37.829680 1253581 main.go:143] libmachine: Using SSH client type: native
	I1123 09:00:37.829992 1253581 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34562 <nil> <nil>}
	I1123 09:00:37.830009 1253581 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-082524 && echo "auto-082524" | sudo tee /etc/hostname
	I1123 09:00:38.004218 1253581 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-082524
	
	I1123 09:00:38.004323 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:00:38.032213 1253581 main.go:143] libmachine: Using SSH client type: native
	I1123 09:00:38.032538 1253581 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34562 <nil> <nil>}
	I1123 09:00:38.032555 1253581 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-082524' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-082524/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-082524' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:00:38.196629 1253581 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:00:38.196703 1253581 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 09:00:38.196748 1253581 ubuntu.go:190] setting up certificates
	I1123 09:00:38.196789 1253581 provision.go:84] configureAuth start
	I1123 09:00:38.196871 1253581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-082524
	I1123 09:00:38.216617 1253581 provision.go:143] copyHostCerts
	I1123 09:00:38.216673 1253581 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem, removing ...
	I1123 09:00:38.216681 1253581 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem
	I1123 09:00:38.216746 1253581 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 09:00:38.216836 1253581 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem, removing ...
	I1123 09:00:38.216842 1253581 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem
	I1123 09:00:38.216871 1253581 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 09:00:38.216928 1253581 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem, removing ...
	I1123 09:00:38.216932 1253581 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem
	I1123 09:00:38.216955 1253581 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
	I1123 09:00:38.217006 1253581 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.auto-082524 san=[127.0.0.1 192.168.76.2 auto-082524 localhost minikube]
	I1123 09:00:38.433992 1253581 provision.go:177] copyRemoteCerts
	I1123 09:00:38.434099 1253581 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:00:38.434181 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:00:38.451940 1253581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34562 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/auto-082524/id_rsa Username:docker}
	I1123 09:00:38.581808 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1123 09:00:38.607485 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:00:38.632512 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:00:38.653189 1253581 provision.go:87] duration metric: took 456.365813ms to configureAuth
	I1123 09:00:38.653265 1253581 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:00:38.653496 1253581 config.go:182] Loaded profile config "auto-082524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:38.653652 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:00:38.672512 1253581 main.go:143] libmachine: Using SSH client type: native
	I1123 09:00:38.672836 1253581 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34562 <nil> <nil>}
	I1123 09:00:38.672850 1253581 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:00:39.092799 1253581 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:00:39.092909 1253581 machine.go:97] duration metric: took 4.487006469s to provisionDockerMachine
	I1123 09:00:39.092935 1253581 client.go:176] duration metric: took 11.574572231s to LocalClient.Create
	I1123 09:00:39.092987 1253581 start.go:167] duration metric: took 11.574652401s to libmachine.API.Create "auto-082524"
	I1123 09:00:39.093012 1253581 start.go:293] postStartSetup for "auto-082524" (driver="docker")
	I1123 09:00:39.093038 1253581 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:00:39.093127 1253581 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:00:39.093198 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:00:39.120709 1253581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34562 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/auto-082524/id_rsa Username:docker}
	I1123 09:00:39.239958 1253581 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:00:39.244171 1253581 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:00:39.244200 1253581 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:00:39.244212 1253581 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/addons for local assets ...
	I1123 09:00:39.244269 1253581 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/files for local assets ...
	I1123 09:00:39.244359 1253581 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem -> 10431592.pem in /etc/ssl/certs
	I1123 09:00:39.244474 1253581 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:00:39.259041 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 09:00:39.288930 1253581 start.go:296] duration metric: took 195.889012ms for postStartSetup
	I1123 09:00:39.289290 1253581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-082524
	I1123 09:00:39.321165 1253581 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/config.json ...
	I1123 09:00:39.321442 1253581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:00:39.321491 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:00:39.351353 1253581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34562 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/auto-082524/id_rsa Username:docker}
	I1123 09:00:39.472116 1253581 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:00:39.477644 1253581 start.go:128] duration metric: took 11.962944239s to createHost
	I1123 09:00:39.477674 1253581 start.go:83] releasing machines lock for "auto-082524", held for 11.963080809s
	I1123 09:00:39.477748 1253581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-082524
	I1123 09:00:39.497515 1253581 ssh_runner.go:195] Run: cat /version.json
	I1123 09:00:39.497569 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:00:39.497822 1253581 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:00:39.497879 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:00:39.524209 1253581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34562 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/auto-082524/id_rsa Username:docker}
	I1123 09:00:39.551176 1253581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34562 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/auto-082524/id_rsa Username:docker}
	I1123 09:00:39.645267 1253581 ssh_runner.go:195] Run: systemctl --version
	I1123 09:00:39.778359 1253581 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:00:39.856101 1253581 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:00:39.862127 1253581 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:00:39.862200 1253581 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:00:39.903139 1253581 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
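The find/mv pass above sidelines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI that gets installed later in the run is active. A rough Go equivalent of that rename pass (illustrative; not the actual cni package code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("skip:", err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		// Same patterns as the find command: *bridge* or *podman*,
		// skipping files that were already disabled.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			fmt.Println("rename failed:", err)
		} else {
			fmt.Println("disabled", src)
		}
	}
}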
	I1123 09:00:39.903241 1253581 start.go:496] detecting cgroup driver to use...
	I1123 09:00:39.903287 1253581 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 09:00:39.903370 1253581 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:00:39.926031 1253581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:00:39.947207 1253581 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:00:39.947270 1253581 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:00:39.968057 1253581 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:00:39.991554 1253581 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:00:40.199394 1253581 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:00:40.394996 1253581 docker.go:234] disabling docker service ...
	I1123 09:00:40.395059 1253581 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:00:40.420115 1253581 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:00:40.438246 1253581 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:00:40.606105 1253581 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:00:40.766591 1253581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:00:40.786631 1253581 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:00:40.801834 1253581 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:00:40.801910 1253581 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:40.810982 1253581 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 09:00:40.811047 1253581 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:40.820627 1253581 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:40.829685 1253581 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:40.838412 1253581 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:00:40.847813 1253581 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:40.857615 1253581 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:40.883339 1253581 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:40.892764 1253581 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:00:40.901716 1253581 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:00:40.910056 1253581 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:00:41.080514 1253581 ssh_runner.go:195] Run: sudo systemctl restart crio
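Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, default sysctls) should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the keys below before the crio restart. The surrounding contents of that drop-in are not shown in the log, so this is a reconstruction of only the touched keys, placed under their standard CRI-O sections:

package main

import "fmt"

// expected02Crio reconstructs the keys the sed commands above set;
// it is not a verbatim copy of the file from the test run.
const expected02Crio = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() { fmt.Print(expected02Crio) }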
	I1123 09:00:41.770646 1253581 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:00:41.770722 1253581 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:00:41.775554 1253581 start.go:564] Will wait 60s for crictl version
	I1123 09:00:41.775618 1253581 ssh_runner.go:195] Run: which crictl
	I1123 09:00:41.780031 1253581 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:00:41.821712 1253581 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:00:41.821804 1253581 ssh_runner.go:195] Run: crio --version
	I1123 09:00:41.855892 1253581 ssh_runner.go:195] Run: crio --version
	I1123 09:00:41.892530 1253581 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:00:41.895579 1253581 cli_runner.go:164] Run: docker network inspect auto-082524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:00:41.912867 1253581 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 09:00:41.917333 1253581 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:00:41.932843 1253581 kubeadm.go:884] updating cluster {Name:auto-082524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-082524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:00:41.932970 1253581 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:00:41.933025 1253581 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:00:41.975383 1253581 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:00:41.975404 1253581 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:00:41.975459 1253581 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:00:42.036417 1253581 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:00:42.036491 1253581 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:00:42.036526 1253581 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 09:00:42.036686 1253581 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-082524 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-082524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
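The kubelet unit text printed above is what gets written a few lines later as the 361-byte systemd drop-in (10-kubeadm.conf). Captured here as a Go raw string for reference; this mirrors the fragment shown in the log rather than quoting the on-disk file byte-for-byte:

package main

import "fmt"

// kubeletDropIn mirrors the unit fragment printed in the log above.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-082524 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2

[Install]
`

func main() { fmt.Print(kubeletDropIn) }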
	I1123 09:00:42.036809 1253581 ssh_runner.go:195] Run: crio config
	I1123 09:00:42.167256 1253581 cni.go:84] Creating CNI manager for ""
	I1123 09:00:42.167347 1253581 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:00:42.167389 1253581 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:00:42.167447 1253581 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-082524 NodeName:auto-082524 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:00:42.167778 1253581 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-082524"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:00:42.167931 1253581 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
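The rendered kubeadm.yaml above is a single multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by ---. A throwaway Go check that splits such a file and lists the kinds (the local file name is a placeholder; on the node the file lands at /var/tmp/minikube/kubeadm.yaml):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml") // placeholder path
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
	// For the config above this prints InitConfiguration, ClusterConfiguration,
	// KubeletConfiguration and KubeProxyConfiguration.
}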
	W1123 09:00:38.541601 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	W1123 09:00:40.543850 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	I1123 09:00:42.187599 1253581 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:00:42.187768 1253581 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:00:42.201507 1253581 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1123 09:00:42.225027 1253581 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:00:42.255037 1253581 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1123 09:00:42.275241 1253581 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:00:42.282345 1253581 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:00:42.297897 1253581 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:00:42.461706 1253581 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:00:42.483152 1253581 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524 for IP: 192.168.76.2
	I1123 09:00:42.483252 1253581 certs.go:195] generating shared ca certs ...
	I1123 09:00:42.483283 1253581 certs.go:227] acquiring lock for ca certs: {Name:mk8b2dd1177c57b74f955f055073d275001ee616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:42.483514 1253581 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key
	I1123 09:00:42.483608 1253581 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key
	I1123 09:00:42.483650 1253581 certs.go:257] generating profile certs ...
	I1123 09:00:42.483740 1253581 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/client.key
	I1123 09:00:42.483771 1253581 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/client.crt with IP's: []
	I1123 09:00:42.754685 1253581 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/client.crt ...
	I1123 09:00:42.754764 1253581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/client.crt: {Name:mkb5a2df2d6fc7d1c2c79cc42d5f3e5c1ce1431d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:42.754982 1253581 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/client.key ...
	I1123 09:00:42.755017 1253581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/client.key: {Name:mk0cb10441f015badf7f4250625e61214d3ef0c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:42.755161 1253581 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.key.0c00d17a
	I1123 09:00:42.755230 1253581 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.crt.0c00d17a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 09:00:42.976701 1253581 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.crt.0c00d17a ...
	I1123 09:00:42.976770 1253581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.crt.0c00d17a: {Name:mk12b357555e3b98ad8a5e031be2a7c68f8dbaff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:42.976960 1253581 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.key.0c00d17a ...
	I1123 09:00:42.976996 1253581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.key.0c00d17a: {Name:mkd82361e7134cbeb1f8451263c94b7d64c8d187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:42.977133 1253581 certs.go:382] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.crt.0c00d17a -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.crt
	I1123 09:00:42.977252 1253581 certs.go:386] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.key.0c00d17a -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.key
	I1123 09:00:42.977370 1253581 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/proxy-client.key
	I1123 09:00:42.977408 1253581 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/proxy-client.crt with IP's: []
	I1123 09:00:43.073239 1253581 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/proxy-client.crt ...
	I1123 09:00:43.073270 1253581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/proxy-client.crt: {Name:mk65602301f27ead6e2f766b14e4c70d5dcc7e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:43.073459 1253581 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/proxy-client.key ...
	I1123 09:00:43.073475 1253581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/proxy-client.key: {Name:mkd414626ed6d88527cbb7f9a9a23e8ea98a0db8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
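The profile cert steps above produce a client cert, an apiserver serving cert whose SANs include 10.96.0.1 (the first address of the 10.96.0.0/12 service CIDR), 127.0.0.1, 10.0.0.1 and the node IP 192.168.76.2, and an aggregator proxy-client cert. A self-contained sketch of signing such a serving cert with Go's crypto/x509 (illustrative; the crypto.go code in the run differs in detail and reuses the existing minikubeCA key pair):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Stand-in CA; the real run signs with the already-generated minikubeCA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Serving cert with the SAN IPs listed in the log above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), // first address of the service CIDR
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.76.2"), // node IP
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}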
	I1123 09:00:43.073714 1253581 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem (1338 bytes)
	W1123 09:00:43.073780 1253581 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159_empty.pem, impossibly tiny 0 bytes
	I1123 09:00:43.073795 1253581 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:00:43.073848 1253581 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:00:43.073892 1253581 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:00:43.073933 1253581 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem (1675 bytes)
	I1123 09:00:43.073999 1253581 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 09:00:43.074599 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:00:43.096166 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:00:43.115695 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:00:43.137353 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 09:00:43.156779 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1123 09:00:43.176169 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:00:43.197918 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:00:43.217494 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:00:43.236423 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:00:43.255644 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem --> /usr/share/ca-certificates/1043159.pem (1338 bytes)
	I1123 09:00:43.274099 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /usr/share/ca-certificates/10431592.pem (1708 bytes)
	I1123 09:00:43.295858 1253581 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:00:43.309901 1253581 ssh_runner.go:195] Run: openssl version
	I1123 09:00:43.319137 1253581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:00:43.328614 1253581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:00:43.333046 1253581 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:00:43.333140 1253581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:00:43.392187 1253581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:00:43.402528 1253581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1043159.pem && ln -fs /usr/share/ca-certificates/1043159.pem /etc/ssl/certs/1043159.pem"
	I1123 09:00:43.422210 1253581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1043159.pem
	I1123 09:00:43.426825 1253581 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:03 /usr/share/ca-certificates/1043159.pem
	I1123 09:00:43.426918 1253581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1043159.pem
	I1123 09:00:43.508095 1253581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1043159.pem /etc/ssl/certs/51391683.0"
	I1123 09:00:43.522063 1253581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10431592.pem && ln -fs /usr/share/ca-certificates/10431592.pem /etc/ssl/certs/10431592.pem"
	I1123 09:00:43.531144 1253581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10431592.pem
	I1123 09:00:43.535325 1253581 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:03 /usr/share/ca-certificates/10431592.pem
	I1123 09:00:43.535421 1253581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10431592.pem
	I1123 09:00:43.576875 1253581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10431592.pem /etc/ssl/certs/3ec20f2e.0"
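Each cert installed under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0 above), which is how OpenSSL locates trust anchors. A small Go sketch of that hash-and-symlink step, shelling out to openssl the way the run does (paths are placeholders):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into certsDir as <subject-hash>.0.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate ln -fs: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Placeholder paths; the test run uses /usr/share/ca-certificates and /etc/ssl/certs on the node.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("error:", err)
	}
}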
	I1123 09:00:43.587718 1253581 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:00:43.592261 1253581 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 09:00:43.592343 1253581 kubeadm.go:401] StartCluster: {Name:auto-082524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-082524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:00:43.592429 1253581 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:00:43.592528 1253581 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:00:43.621353 1253581 cri.go:89] found id: ""
	I1123 09:00:43.621450 1253581 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:00:43.633005 1253581 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 09:00:43.644389 1253581 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 09:00:43.644473 1253581 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 09:00:43.659438 1253581 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 09:00:43.659458 1253581 kubeadm.go:158] found existing configuration files:
	
	I1123 09:00:43.659543 1253581 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 09:00:43.668585 1253581 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 09:00:43.668672 1253581 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 09:00:43.676719 1253581 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 09:00:43.685454 1253581 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 09:00:43.685547 1253581 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 09:00:43.693319 1253581 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 09:00:43.701814 1253581 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 09:00:43.701909 1253581 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 09:00:43.709594 1253581 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 09:00:43.718338 1253581 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 09:00:43.718432 1253581 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 09:00:43.726148 1253581 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 09:00:43.788657 1253581 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 09:00:43.789219 1253581 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 09:00:43.830497 1253581 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 09:00:43.830608 1253581 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 09:00:43.830702 1253581 kubeadm.go:319] OS: Linux
	I1123 09:00:43.830797 1253581 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 09:00:43.830873 1253581 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 09:00:43.830957 1253581 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 09:00:43.831038 1253581 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 09:00:43.831118 1253581 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 09:00:43.831218 1253581 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 09:00:43.831291 1253581 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 09:00:43.831360 1253581 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 09:00:43.831463 1253581 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 09:00:43.916383 1253581 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 09:00:43.916551 1253581 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 09:00:43.916659 1253581 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 09:00:43.927545 1253581 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 09:00:43.936289 1253581 out.go:252]   - Generating certificates and keys ...
	I1123 09:00:43.936390 1253581 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 09:00:43.936468 1253581 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 09:00:44.530191 1253581 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 09:00:45.943987 1253581 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 09:00:46.826249 1253581 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 09:00:46.979457 1253581 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	W1123 09:00:43.041182 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	W1123 09:00:45.042115 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	I1123 09:00:47.458716 1253581 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 09:00:47.459125 1253581 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-082524 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 09:00:47.998507 1253581 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 09:00:47.998645 1253581 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-082524 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 09:00:48.380812 1253581 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 09:00:48.511077 1253581 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 09:00:48.854434 1253581 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 09:00:48.854735 1253581 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 09:00:49.288584 1253581 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 09:00:49.714490 1253581 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 09:00:50.098139 1253581 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 09:00:50.567609 1253581 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 09:00:50.720479 1253581 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 09:00:50.721048 1253581 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 09:00:50.723632 1253581 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 09:00:50.727304 1253581 out.go:252]   - Booting up control plane ...
	I1123 09:00:50.727406 1253581 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 09:00:50.727484 1253581 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 09:00:50.727551 1253581 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 09:00:50.742932 1253581 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 09:00:50.743051 1253581 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 09:00:50.749874 1253581 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 09:00:50.750436 1253581 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 09:00:50.750683 1253581 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 09:00:50.891840 1253581 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 09:00:50.891984 1253581 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 09:00:51.892989 1253581 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001634135s
	I1123 09:00:51.896672 1253581 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 09:00:51.896769 1253581 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 09:00:51.897119 1253581 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 09:00:51.897212 1253581 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1123 09:00:47.541879 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	W1123 09:00:49.543132 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	W1123 09:00:52.041708 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	I1123 09:00:56.478046 1253581 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.580993089s
	I1123 09:00:57.020571 1253581 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.123854904s
	W1123 09:00:54.049554 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	W1123 09:00:56.540484 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	I1123 09:00:57.898604 1253581 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001843493s
	I1123 09:00:57.923900 1253581 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 09:00:57.946691 1253581 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 09:00:57.959135 1253581 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 09:00:57.959373 1253581 kubeadm.go:319] [mark-control-plane] Marking the node auto-082524 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 09:00:57.974900 1253581 kubeadm.go:319] [bootstrap-token] Using token: hg5tsc.npysbiukpzp0hebw
	I1123 09:00:57.977791 1253581 out.go:252]   - Configuring RBAC rules ...
	I1123 09:00:57.977923 1253581 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 09:00:57.981631 1253581 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 09:00:57.989617 1253581 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 09:00:57.993491 1253581 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 09:00:58.002282 1253581 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 09:00:58.011263 1253581 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 09:00:58.309352 1253581 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 09:00:58.746681 1253581 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 09:00:59.309119 1253581 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 09:00:59.312413 1253581 kubeadm.go:319] 
	I1123 09:00:59.312490 1253581 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 09:00:59.312495 1253581 kubeadm.go:319] 
	I1123 09:00:59.312581 1253581 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 09:00:59.312586 1253581 kubeadm.go:319] 
	I1123 09:00:59.312611 1253581 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 09:00:59.312670 1253581 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 09:00:59.312726 1253581 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 09:00:59.312730 1253581 kubeadm.go:319] 
	I1123 09:00:59.312784 1253581 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 09:00:59.312788 1253581 kubeadm.go:319] 
	I1123 09:00:59.312835 1253581 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 09:00:59.312840 1253581 kubeadm.go:319] 
	I1123 09:00:59.312892 1253581 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 09:00:59.312974 1253581 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 09:00:59.313044 1253581 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 09:00:59.313047 1253581 kubeadm.go:319] 
	I1123 09:00:59.313132 1253581 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 09:00:59.313211 1253581 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 09:00:59.313215 1253581 kubeadm.go:319] 
	I1123 09:00:59.313299 1253581 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hg5tsc.npysbiukpzp0hebw \
	I1123 09:00:59.313403 1253581 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb \
	I1123 09:00:59.313423 1253581 kubeadm.go:319] 	--control-plane 
	I1123 09:00:59.313435 1253581 kubeadm.go:319] 
	I1123 09:00:59.313521 1253581 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 09:00:59.313526 1253581 kubeadm.go:319] 
	I1123 09:00:59.313608 1253581 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hg5tsc.npysbiukpzp0hebw \
	I1123 09:00:59.313710 1253581 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb 
	I1123 09:00:59.318673 1253581 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 09:00:59.318920 1253581 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 09:00:59.319033 1253581 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 09:00:59.319122 1253581 cni.go:84] Creating CNI manager for ""
	I1123 09:00:59.319137 1253581 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:00:59.324158 1253581 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 09:00:59.326936 1253581 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 09:00:59.331618 1253581 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 09:00:59.331639 1253581 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 09:00:59.344288 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 09:01:00.151076 1253581 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 09:01:00.151265 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-082524 minikube.k8s.io/updated_at=2025_11_23T09_01_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=auto-082524 minikube.k8s.io/primary=true
	I1123 09:01:00.151265 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:01:00.508403 1253581 ops.go:34] apiserver oom_adj: -16
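The oom_adj probe above reads /proc/<apiserver-pid>/oom_adj and reports -16, i.e. the kernel is strongly discouraged from OOM-killing the apiserver. A minimal Go version of that probe (a hypothetical helper, not the actual ops.go code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep returns the PID(s) of the running kube-apiserver; take the first.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.Fields(string(out))[0]
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj) // the run above printed -16
}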
	I1123 09:01:00.508439 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:01:01.009349 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:01:01.509416 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:01:02.008481 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1123 09:00:58.541297 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	W1123 09:01:01.040584 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	I1123 09:01:02.508969 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:01:03.009512 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:01:03.508753 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:01:04.008553 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:01:04.508546 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:01:04.678730 1253581 kubeadm.go:1114] duration metric: took 4.527617369s to wait for elevateKubeSystemPrivileges
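The repeated "kubectl get sa default" calls above, roughly every 500ms, are how start-up waits for the default service account to exist before binding cluster-admin to kube-system:default; the 4.5s figure is simply how long that poll took in this run. A hedged sketch of an equivalent poll loop (helper name and timeout are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the deadline passes.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	// Paths mirror the ones in the log; adjust for a local experiment.
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.34.1/kubectl", "/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println("wait result:", err)
}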
	I1123 09:01:04.678763 1253581 kubeadm.go:403] duration metric: took 21.086424458s to StartCluster
	I1123 09:01:04.678780 1253581 settings.go:142] acquiring lock: {Name:mk23f3092f33e47ced9558cb4bac2b30c55547fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:01:04.678840 1253581 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 09:01:04.679873 1253581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:01:04.680098 1253581 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:01:04.680183 1253581 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 09:01:04.680410 1253581 config.go:182] Loaded profile config "auto-082524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:01:04.680451 1253581 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:01:04.680515 1253581 addons.go:70] Setting storage-provisioner=true in profile "auto-082524"
	I1123 09:01:04.680530 1253581 addons.go:239] Setting addon storage-provisioner=true in "auto-082524"
	I1123 09:01:04.680556 1253581 host.go:66] Checking if "auto-082524" exists ...
	I1123 09:01:04.681036 1253581 cli_runner.go:164] Run: docker container inspect auto-082524 --format={{.State.Status}}
	I1123 09:01:04.681503 1253581 addons.go:70] Setting default-storageclass=true in profile "auto-082524"
	I1123 09:01:04.681527 1253581 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-082524"
	I1123 09:01:04.681839 1253581 cli_runner.go:164] Run: docker container inspect auto-082524 --format={{.State.Status}}
	I1123 09:01:04.683461 1253581 out.go:179] * Verifying Kubernetes components...
	I1123 09:01:04.686614 1253581 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:01:04.721303 1253581 addons.go:239] Setting addon default-storageclass=true in "auto-082524"
	I1123 09:01:04.721341 1253581 host.go:66] Checking if "auto-082524" exists ...
	I1123 09:01:04.721796 1253581 cli_runner.go:164] Run: docker container inspect auto-082524 --format={{.State.Status}}
	I1123 09:01:04.729054 1253581 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:01:04.734485 1253581 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:01:04.734509 1253581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:01:04.734572 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:01:04.756168 1253581 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:01:04.756194 1253581 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:01:04.756302 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:01:04.805057 1253581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34562 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/auto-082524/id_rsa Username:docker}
	I1123 09:01:04.806551 1253581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34562 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/auto-082524/id_rsa Username:docker}
	I1123 09:01:05.208872 1253581 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:01:05.216571 1253581 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 09:01:05.216672 1253581 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:01:05.257082 1253581 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:01:05.656562 1253581 node_ready.go:35] waiting up to 15m0s for node "auto-082524" to be "Ready" ...
	I1123 09:01:05.656796 1253581 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 09:01:05.963123 1253581 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1123 09:01:05.965999 1253581 addons.go:530] duration metric: took 1.285536743s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 09:01:06.160766 1253581 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-082524" context rescaled to 1 replicas
	W1123 09:01:03.541571 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	W1123 09:01:06.040133 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	I1123 09:01:07.542102 1250435 pod_ready.go:94] pod "coredns-66bc5c9577-zwlsw" is "Ready"
	I1123 09:01:07.542131 1250435 pod_ready.go:86] duration metric: took 35.507438116s for pod "coredns-66bc5c9577-zwlsw" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:07.544681 1250435 pod_ready.go:83] waiting for pod "etcd-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:07.550183 1250435 pod_ready.go:94] pod "etcd-no-preload-591175" is "Ready"
	I1123 09:01:07.550213 1250435 pod_ready.go:86] duration metric: took 5.505879ms for pod "etcd-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:07.552219 1250435 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:07.557404 1250435 pod_ready.go:94] pod "kube-apiserver-no-preload-591175" is "Ready"
	I1123 09:01:07.557440 1250435 pod_ready.go:86] duration metric: took 5.189359ms for pod "kube-apiserver-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:07.559641 1250435 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:07.739120 1250435 pod_ready.go:94] pod "kube-controller-manager-no-preload-591175" is "Ready"
	I1123 09:01:07.739243 1250435 pod_ready.go:86] duration metric: took 179.577384ms for pod "kube-controller-manager-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:07.940117 1250435 pod_ready.go:83] waiting for pod "kube-proxy-rblwh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.338393 1250435 pod_ready.go:94] pod "kube-proxy-rblwh" is "Ready"
	I1123 09:01:08.338430 1250435 pod_ready.go:86] duration metric: took 398.278599ms for pod "kube-proxy-rblwh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.538441 1250435 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.939165 1250435 pod_ready.go:94] pod "kube-scheduler-no-preload-591175" is "Ready"
	I1123 09:01:08.939261 1250435 pod_ready.go:86] duration metric: took 400.776941ms for pod "kube-scheduler-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.939275 1250435 pod_ready.go:40] duration metric: took 36.929188288s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:01:09.012877 1250435 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 09:01:09.016181 1250435 out.go:179] * Done! kubectl is now configured to use "no-preload-591175" cluster and "default" namespace by default
	W1123 09:01:07.660267 1253581 node_ready.go:57] node "auto-082524" has "Ready":"False" status (will retry)
	W1123 09:01:10.159933 1253581 node_ready.go:57] node "auto-082524" has "Ready":"False" status (will retry)
	W1123 09:01:12.160427 1253581 node_ready.go:57] node "auto-082524" has "Ready":"False" status (will retry)
	W1123 09:01:14.659325 1253581 node_ready.go:57] node "auto-082524" has "Ready":"False" status (will retry)
	W1123 09:01:16.659625 1253581 node_ready.go:57] node "auto-082524" has "Ready":"False" status (will retry)
	W1123 09:01:19.160082 1253581 node_ready.go:57] node "auto-082524" has "Ready":"False" status (will retry)
	W1123 09:01:21.659925 1253581 node_ready.go:57] node "auto-082524" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 23 09:01:04 no-preload-591175 crio[658]: time="2025-11-23T09:01:04.280369871Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:01:04 no-preload-591175 crio[658]: time="2025-11-23T09:01:04.287752907Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:01:04 no-preload-591175 crio[658]: time="2025-11-23T09:01:04.288737544Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:01:04 no-preload-591175 crio[658]: time="2025-11-23T09:01:04.308452202Z" level=info msg="Created container 0f7817fdeccec014e739b087663b5baa22386f19c93acc1e9b1b90b8b9eea98b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2/dashboard-metrics-scraper" id=286d76ff-e87c-4588-a857-f52c8b15ca32 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:01:04 no-preload-591175 crio[658]: time="2025-11-23T09:01:04.309504267Z" level=info msg="Starting container: 0f7817fdeccec014e739b087663b5baa22386f19c93acc1e9b1b90b8b9eea98b" id=524462fc-0e9c-4cc2-a155-fefac8fc8c35 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:01:04 no-preload-591175 crio[658]: time="2025-11-23T09:01:04.31184953Z" level=info msg="Started container" PID=1640 containerID=0f7817fdeccec014e739b087663b5baa22386f19c93acc1e9b1b90b8b9eea98b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2/dashboard-metrics-scraper id=524462fc-0e9c-4cc2-a155-fefac8fc8c35 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8859b9c0184a85fa870afe83654c40ea3f6cddae4430369570eeccef4c2fb4a1
	Nov 23 09:01:04 no-preload-591175 conmon[1638]: conmon 0f7817fdeccec014e739 <ninfo>: container 1640 exited with status 1
	Nov 23 09:01:04 no-preload-591175 crio[658]: time="2025-11-23T09:01:04.543858522Z" level=info msg="Removing container: 536172c59fa20a930a878a233c75d78252f4246c0ff999fcfe6c1cab43582430" id=13a80dcd-eeb0-43d4-a488-91a0bb67388c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:01:04 no-preload-591175 crio[658]: time="2025-11-23T09:01:04.553324551Z" level=info msg="Error loading conmon cgroup of container 536172c59fa20a930a878a233c75d78252f4246c0ff999fcfe6c1cab43582430: cgroup deleted" id=13a80dcd-eeb0-43d4-a488-91a0bb67388c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:01:04 no-preload-591175 crio[658]: time="2025-11-23T09:01:04.560664553Z" level=info msg="Removed container 536172c59fa20a930a878a233c75d78252f4246c0ff999fcfe6c1cab43582430: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2/dashboard-metrics-scraper" id=13a80dcd-eeb0-43d4-a488-91a0bb67388c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.524235555Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.531519944Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.531555881Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.531578961Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.534848424Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.534889768Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.534916336Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.538095077Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.538241312Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.538275904Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.541423557Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.541560447Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.541596532Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.545785509Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.545937735Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0f7817fdeccec       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago       Exited              dashboard-metrics-scraper   2                   8859b9c0184a8       dashboard-metrics-scraper-6ffb444bf9-tgnd2   kubernetes-dashboard
	fbdf54514881b       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           22 seconds ago       Running             storage-provisioner         2                   1b729bfa6a0a2       storage-provisioner                          kube-system
	74106f0c2a309       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   4182ebc0e8626       kubernetes-dashboard-855c9754f9-pjsjj        kubernetes-dashboard
	2fcc0e19109bc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   47ebb33775180       coredns-66bc5c9577-zwlsw                     kube-system
	8c822177d7824       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   866be0d2a21b5       busybox                                      default
	95e135f4cdc6c       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           53 seconds ago       Exited              storage-provisioner         1                   1b729bfa6a0a2       storage-provisioner                          kube-system
	e796016163ed3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago       Running             kube-proxy                  1                   5e7665d9ae55b       kube-proxy-rblwh                             kube-system
	98061e6f3b035       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   8cb02715115d4       kindnet-v65j2                                kube-system
	9176ef57780ee       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   48d249b39a2f5       kube-controller-manager-no-preload-591175    kube-system
	157d0e0fd3e72       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   76aeb73f60348       kube-apiserver-no-preload-591175             kube-system
	aebf3ba174ff5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   b637466a49a97       etcd-no-preload-591175                       kube-system
	84f5d17f9123d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   2e6d4f1213e5b       kube-scheduler-no-preload-591175             kube-system
	
	
	==> coredns [2fcc0e19109bc84ca2b9d741452c957bdab6dd11b089a54216971f59ea750720] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36412 - 20243 "HINFO IN 7023655903771878389.6344257054590995337. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012916968s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-591175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-591175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=no-preload-591175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_59_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:59:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-591175
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:01:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:00:59 +0000   Sun, 23 Nov 2025 08:59:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:00:59 +0000   Sun, 23 Nov 2025 08:59:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:00:59 +0000   Sun, 23 Nov 2025 08:59:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:00:59 +0000   Sun, 23 Nov 2025 08:59:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-591175
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                f436885f-b4ec-44fe-a494-6bb1784496fe
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-zwlsw                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     116s
	  kube-system                 etcd-no-preload-591175                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-v65j2                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      116s
	  kube-system                 kube-apiserver-no-preload-591175              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-no-preload-591175     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-rblwh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-no-preload-591175              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-tgnd2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-pjsjj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 115s                   kube-proxy       
	  Normal   Starting                 52s                    kube-proxy       
	  Normal   Starting                 2m10s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m10s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node no-preload-591175 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node no-preload-591175 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m10s (x6 over 2m10s)  kubelet          Node no-preload-591175 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m1s                   kubelet          Node no-preload-591175 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m1s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m1s                   kubelet          Node no-preload-591175 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m1s                   kubelet          Node no-preload-591175 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m1s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           117s                   node-controller  Node no-preload-591175 event: Registered Node no-preload-591175 in Controller
	  Normal   NodeReady                101s                   kubelet          Node no-preload-591175 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node no-preload-591175 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node no-preload-591175 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node no-preload-591175 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node no-preload-591175 event: Registered Node no-preload-591175 in Controller
	
	
	==> dmesg <==
	[Nov23 08:39] overlayfs: idmapped layers are currently not supported
	[ +25.090966] overlayfs: idmapped layers are currently not supported
	[Nov23 08:40] overlayfs: idmapped layers are currently not supported
	[ +26.896711] overlayfs: idmapped layers are currently not supported
	[Nov23 08:41] overlayfs: idmapped layers are currently not supported
	[Nov23 08:43] overlayfs: idmapped layers are currently not supported
	[Nov23 08:45] overlayfs: idmapped layers are currently not supported
	[Nov23 08:46] overlayfs: idmapped layers are currently not supported
	[Nov23 08:47] overlayfs: idmapped layers are currently not supported
	[Nov23 08:49] overlayfs: idmapped layers are currently not supported
	[Nov23 08:51] overlayfs: idmapped layers are currently not supported
	[ +55.116920] overlayfs: idmapped layers are currently not supported
	[Nov23 08:52] overlayfs: idmapped layers are currently not supported
	[  +5.731396] overlayfs: idmapped layers are currently not supported
	[Nov23 08:53] overlayfs: idmapped layers are currently not supported
	[Nov23 08:54] overlayfs: idmapped layers are currently not supported
	[Nov23 08:55] overlayfs: idmapped layers are currently not supported
	[Nov23 08:56] overlayfs: idmapped layers are currently not supported
	[Nov23 08:57] overlayfs: idmapped layers are currently not supported
	[Nov23 08:58] overlayfs: idmapped layers are currently not supported
	[ +37.440319] overlayfs: idmapped layers are currently not supported
	[Nov23 08:59] overlayfs: idmapped layers are currently not supported
	[Nov23 09:00] overlayfs: idmapped layers are currently not supported
	[ +12.221002] overlayfs: idmapped layers are currently not supported
	[ +31.219239] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [aebf3ba174ff52b2d9016df7e7c2a73bddd769ac238a51aeefd85b75d890f557] <==
	{"level":"warn","ts":"2025-11-23T09:00:26.241831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.287177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.355775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.393165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.435411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.505037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.555241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.603984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.661732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.709923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.727630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.809834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.839405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.885751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.886571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.906129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.936046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.973472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.994300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:27.021095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:27.049098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:27.077859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:27.115474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:27.131084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:27.232627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37346","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:01:24 up  9:43,  0 user,  load average: 3.08, 3.37, 2.87
	Linux no-preload-591175 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [98061e6f3b0355afb9092940374e4137f051f66db6053d855478a46ce03c472c] <==
	I1123 09:00:30.261033       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:00:30.264273       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 09:00:30.264429       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:00:30.264443       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:00:30.264455       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:00:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:00:30.538588       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:00:30.538614       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:00:30.538622       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:00:30.538737       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 09:01:00.539043       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 09:01:00.539100       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 09:01:00.539229       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 09:01:00.539308       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1123 09:01:02.039889       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:01:02.039997       1 metrics.go:72] Registering metrics
	I1123 09:01:02.040103       1 controller.go:711] "Syncing nftables rules"
	I1123 09:01:10.523864       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:01:10.523963       1 main.go:301] handling current node
	I1123 09:01:20.527327       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:01:20.527358       1 main.go:301] handling current node
	
	
	==> kube-apiserver [157d0e0fd3e72e28588020ec573e4dedd42cd637d9021c7aaf88f84bb1ff9ca6] <==
	I1123 09:00:28.630525       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 09:00:28.640155       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 09:00:28.640797       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 09:00:28.650749       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 09:00:28.650825       1 policy_source.go:240] refreshing policies
	I1123 09:00:28.651027       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 09:00:28.665999       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:00:28.704431       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 09:00:28.709720       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 09:00:28.738080       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 09:00:28.741668       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:00:28.749617       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 09:00:28.809793       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1123 09:00:28.889681       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 09:00:29.029215       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:00:29.222194       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:00:30.870910       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:00:30.988326       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:00:31.099339       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:00:31.135836       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:00:31.381632       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.30.26"}
	I1123 09:00:31.451952       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.89.206"}
	I1123 09:00:31.929481       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:00:32.146802       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:00:32.344962       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [9176ef57780eecfb0be6625d611ecda108774756a7a7ba2e04cae7ba6631a68b] <==
	I1123 09:00:31.904855       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-591175"
	I1123 09:00:31.904936       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 09:00:31.907286       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:00:31.907512       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:00:31.909127       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:00:31.909268       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 09:00:31.909307       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 09:00:31.915272       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 09:00:31.915495       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:00:31.931309       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:00:31.931296       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 09:00:31.931331       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:00:31.931353       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:00:31.934344       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:00:31.939310       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:00:31.939334       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:00:31.939342       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:00:31.943864       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 09:00:31.945065       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:00:31.951733       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:00:31.959338       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 09:00:31.971510       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 09:00:31.974696       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 09:00:31.976264       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:00:31.979358       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	
	
	==> kube-proxy [e796016163ed388aa9e995b7fbf568cb73db2071a880fa33c913e398ff464229] <==
	I1123 09:00:30.921395       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:00:31.327134       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:00:31.436412       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:00:31.436445       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 09:00:31.436511       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:00:31.483636       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:00:31.483690       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:00:31.516055       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:00:31.523714       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:00:31.531375       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:00:31.572347       1 config.go:200] "Starting service config controller"
	I1123 09:00:31.572377       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:00:31.572403       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:00:31.572407       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:00:31.572422       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:00:31.572426       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:00:31.583788       1 config.go:309] "Starting node config controller"
	I1123 09:00:31.583852       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:00:31.583882       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:00:31.672669       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:00:31.672702       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:00:31.672750       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [84f5d17f9123dfb226e15a389bc9a5e5b2de8b259f1186f86f2f3673b2895055] <==
	I1123 09:00:23.086868       1 serving.go:386] Generated self-signed cert in-memory
	W1123 09:00:28.307973       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 09:00:28.308015       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 09:00:28.308025       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 09:00:28.308033       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 09:00:28.505066       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:00:28.505097       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:00:28.608773       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 09:00:28.608929       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:00:28.608949       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:00:28.608965       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:00:28.716236       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:00:32 no-preload-591175 kubelet[780]: I1123 09:00:32.766479     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-284vw\" (UniqueName: \"kubernetes.io/projected/362aa06f-c276-4d53-b60f-02c2feed6668-kube-api-access-284vw\") pod \"kubernetes-dashboard-855c9754f9-pjsjj\" (UID: \"362aa06f-c276-4d53-b60f-02c2feed6668\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pjsjj"
	Nov 23 09:00:32 no-preload-591175 kubelet[780]: I1123 09:00:32.766788     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/362aa06f-c276-4d53-b60f-02c2feed6668-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-pjsjj\" (UID: \"362aa06f-c276-4d53-b60f-02c2feed6668\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pjsjj"
	Nov 23 09:00:32 no-preload-591175 kubelet[780]: I1123 09:00:32.766852     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kr7g\" (UniqueName: \"kubernetes.io/projected/8e169862-196e-4afb-ad57-199e564f44e3-kube-api-access-8kr7g\") pod \"dashboard-metrics-scraper-6ffb444bf9-tgnd2\" (UID: \"8e169862-196e-4afb-ad57-199e564f44e3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2"
	Nov 23 09:00:32 no-preload-591175 kubelet[780]: I1123 09:00:32.766884     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8e169862-196e-4afb-ad57-199e564f44e3-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-tgnd2\" (UID: \"8e169862-196e-4afb-ad57-199e564f44e3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2"
	Nov 23 09:00:33 no-preload-591175 kubelet[780]: W1123 09:00:33.068815     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369/crio-8859b9c0184a85fa870afe83654c40ea3f6cddae4430369570eeccef4c2fb4a1 WatchSource:0}: Error finding container 8859b9c0184a85fa870afe83654c40ea3f6cddae4430369570eeccef4c2fb4a1: Status 404 returned error can't find the container with id 8859b9c0184a85fa870afe83654c40ea3f6cddae4430369570eeccef4c2fb4a1
	Nov 23 09:00:37 no-preload-591175 kubelet[780]: I1123 09:00:37.177297     780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 09:00:45 no-preload-591175 kubelet[780]: I1123 09:00:45.471892     780 scope.go:117] "RemoveContainer" containerID="c662e068a81fc6266452673e647d3a289feb6f32478889e431767e0380d2b133"
	Nov 23 09:00:45 no-preload-591175 kubelet[780]: I1123 09:00:45.508055     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pjsjj" podStartSLOduration=7.808804731 podStartE2EDuration="13.508038948s" podCreationTimestamp="2025-11-23 09:00:32 +0000 UTC" firstStartedPulling="2025-11-23 09:00:32.988145058 +0000 UTC m=+13.015821933" lastFinishedPulling="2025-11-23 09:00:38.68737925 +0000 UTC m=+18.715056150" observedRunningTime="2025-11-23 09:00:39.472603118 +0000 UTC m=+19.500280009" watchObservedRunningTime="2025-11-23 09:00:45.508038948 +0000 UTC m=+25.535715823"
	Nov 23 09:00:46 no-preload-591175 kubelet[780]: I1123 09:00:46.476280     780 scope.go:117] "RemoveContainer" containerID="c662e068a81fc6266452673e647d3a289feb6f32478889e431767e0380d2b133"
	Nov 23 09:00:46 no-preload-591175 kubelet[780]: I1123 09:00:46.477146     780 scope.go:117] "RemoveContainer" containerID="536172c59fa20a930a878a233c75d78252f4246c0ff999fcfe6c1cab43582430"
	Nov 23 09:00:46 no-preload-591175 kubelet[780]: E1123 09:00:46.478329     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tgnd2_kubernetes-dashboard(8e169862-196e-4afb-ad57-199e564f44e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2" podUID="8e169862-196e-4afb-ad57-199e564f44e3"
	Nov 23 09:00:47 no-preload-591175 kubelet[780]: I1123 09:00:47.481268     780 scope.go:117] "RemoveContainer" containerID="536172c59fa20a930a878a233c75d78252f4246c0ff999fcfe6c1cab43582430"
	Nov 23 09:00:47 no-preload-591175 kubelet[780]: E1123 09:00:47.482324     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tgnd2_kubernetes-dashboard(8e169862-196e-4afb-ad57-199e564f44e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2" podUID="8e169862-196e-4afb-ad57-199e564f44e3"
	Nov 23 09:00:53 no-preload-591175 kubelet[780]: I1123 09:00:53.002389     780 scope.go:117] "RemoveContainer" containerID="536172c59fa20a930a878a233c75d78252f4246c0ff999fcfe6c1cab43582430"
	Nov 23 09:00:53 no-preload-591175 kubelet[780]: E1123 09:00:53.003277     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tgnd2_kubernetes-dashboard(8e169862-196e-4afb-ad57-199e564f44e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2" podUID="8e169862-196e-4afb-ad57-199e564f44e3"
	Nov 23 09:01:01 no-preload-591175 kubelet[780]: I1123 09:01:01.518896     780 scope.go:117] "RemoveContainer" containerID="95e135f4cdc6c76c534ff22368e28377e25b471ed736c58f590eae564658328b"
	Nov 23 09:01:04 no-preload-591175 kubelet[780]: I1123 09:01:04.276653     780 scope.go:117] "RemoveContainer" containerID="536172c59fa20a930a878a233c75d78252f4246c0ff999fcfe6c1cab43582430"
	Nov 23 09:01:04 no-preload-591175 kubelet[780]: I1123 09:01:04.530228     780 scope.go:117] "RemoveContainer" containerID="536172c59fa20a930a878a233c75d78252f4246c0ff999fcfe6c1cab43582430"
	Nov 23 09:01:04 no-preload-591175 kubelet[780]: I1123 09:01:04.530510     780 scope.go:117] "RemoveContainer" containerID="0f7817fdeccec014e739b087663b5baa22386f19c93acc1e9b1b90b8b9eea98b"
	Nov 23 09:01:04 no-preload-591175 kubelet[780]: E1123 09:01:04.530644     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tgnd2_kubernetes-dashboard(8e169862-196e-4afb-ad57-199e564f44e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2" podUID="8e169862-196e-4afb-ad57-199e564f44e3"
	Nov 23 09:01:13 no-preload-591175 kubelet[780]: I1123 09:01:13.001592     780 scope.go:117] "RemoveContainer" containerID="0f7817fdeccec014e739b087663b5baa22386f19c93acc1e9b1b90b8b9eea98b"
	Nov 23 09:01:13 no-preload-591175 kubelet[780]: E1123 09:01:13.001810     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tgnd2_kubernetes-dashboard(8e169862-196e-4afb-ad57-199e564f44e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2" podUID="8e169862-196e-4afb-ad57-199e564f44e3"
	Nov 23 09:01:21 no-preload-591175 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 09:01:21 no-preload-591175 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 09:01:21 no-preload-591175 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [74106f0c2a309342ef590081bd9557bf94fa83268eb9ee5ec4d761dd9cb1c240] <==
	2025/11/23 09:00:38 Using namespace: kubernetes-dashboard
	2025/11/23 09:00:38 Using in-cluster config to connect to apiserver
	2025/11/23 09:00:38 Using secret token for csrf signing
	2025/11/23 09:00:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 09:00:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 09:00:38 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 09:00:38 Generating JWE encryption key
	2025/11/23 09:00:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 09:00:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 09:00:40 Initializing JWE encryption key from synchronized object
	2025/11/23 09:00:40 Creating in-cluster Sidecar client
	2025/11/23 09:00:40 Serving insecurely on HTTP port: 9090
	2025/11/23 09:00:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:01:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:00:38 Starting overwatch
	
	
	==> storage-provisioner [95e135f4cdc6c76c534ff22368e28377e25b471ed736c58f590eae564658328b] <==
	I1123 09:00:30.858439       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 09:01:00.864725       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fbdf54514881b66bf257ad9c157d36d9d46f3a29c186b01a6f64ee63c4de43fb] <==
	I1123 09:01:01.635271       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 09:01:01.709398       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 09:01:01.710379       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 09:01:01.717447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:05.182719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:09.455466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:13.055388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:16.109273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:19.131206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:19.140021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:01:19.140219       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:01:19.141103       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-591175_3506b145-c0e5-4893-b3bf-6f8d18292f33!
	I1123 09:01:19.145982       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"600e7609-78b8-477b-9429-5d86b624370f", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-591175_3506b145-c0e5-4893-b3bf-6f8d18292f33 became leader
	W1123 09:01:19.148174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:19.151855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:01:19.242308       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-591175_3506b145-c0e5-4893-b3bf-6f8d18292f33!
	W1123 09:01:21.154532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:21.164555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:23.167573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:23.172660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
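The kubelet journal above ends with systemd stopping kubelet.service at 09:01:21, which lines up with the pause operation this test exercises. For manual triage of a profile left in this state, a minimal sketch (assuming the container is still up; these are standard minikube and systemd commands, not part of the harness):

	out/minikube-linux-arm64 ssh -p no-preload-591175 -- sudo systemctl is-active kubelet
	out/minikube-linux-arm64 ssh -p no-preload-591175 -- sudo journalctl -u kubelet --no-pager | tail -n 20

An "inactive" result from the first command matches the "Stopped kubelet.service" lines captured in the journal above.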
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-591175 -n no-preload-591175
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-591175 -n no-preload-591175: exit status 2 (372.163235ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-591175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
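The field selector above asks the apiserver for every pod whose phase is not Running, so any pod it lists is in a non-Running phase. A slightly more readable variant of the same query, shown only as a sketch using standard kubectl output options:

	kubectl --context no-preload-591175 get pods -A \
	  --field-selector=status.phase!=Running \
	  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,PHASE:.status.phase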
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-591175
helpers_test.go:243: (dbg) docker inspect no-preload-591175:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369",
	        "Created": "2025-11-23T08:58:38.098322261Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1250560,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:00:12.722795342Z",
	            "FinishedAt": "2025-11-23T09:00:11.529118259Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369/hostname",
	        "HostsPath": "/var/lib/docker/containers/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369/hosts",
	        "LogPath": "/var/lib/docker/containers/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369-json.log",
	        "Name": "/no-preload-591175",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-591175:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-591175",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369",
	                "LowerDir": "/var/lib/docker/overlay2/771f258756c2bbb7a52acc018af18f3945b3a6a6c890b53f5dd366fd3977c014-init/diff:/var/lib/docker/overlay2/1daf7e78eaf87de97d39aa8ab93104f7f042993da991f05655ed9cacbb5e4c52/diff",
	                "MergedDir": "/var/lib/docker/overlay2/771f258756c2bbb7a52acc018af18f3945b3a6a6c890b53f5dd366fd3977c014/merged",
	                "UpperDir": "/var/lib/docker/overlay2/771f258756c2bbb7a52acc018af18f3945b3a6a6c890b53f5dd366fd3977c014/diff",
	                "WorkDir": "/var/lib/docker/overlay2/771f258756c2bbb7a52acc018af18f3945b3a6a6c890b53f5dd366fd3977c014/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-591175",
	                "Source": "/var/lib/docker/volumes/no-preload-591175/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-591175",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-591175",
	                "name.minikube.sigs.k8s.io": "no-preload-591175",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6718a243beeb99aacf9742136d0eb632fded191cda1d18b423049d24f16ab944",
	            "SandboxKey": "/var/run/docker/netns/6718a243beeb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34557"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34558"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34561"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34559"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34560"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-591175": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:34:c8:d3:57:a3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5cb890fde481b5761669b16b762b3e0bbd64d2ef935451546915fdbb684d58af",
	                    "EndpointID": "c001ae934474dd29d45b1a976a7659fa2804413ae2a5e62bbce07609d8435232",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-591175",
	                        "14f3744363b8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
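The full docker inspect dump above is what the harness captures; when only a few fields matter (container state and the published ports), Go templates can pull them directly. A minimal sketch using standard docker inspect formatting, with field paths taken from the JSON above:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}} restarting={{.State.Restarting}}' no-preload-591175
	docker inspect -f '{{json .NetworkSettings.Ports}}' no-preload-591175

Given the state recorded above, this should report the container as running and un-paused, and list the 127.0.0.1 host-port bindings for 22, 2376, 5000, 8443 and 32443.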
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-591175 -n no-preload-591175
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-591175 -n no-preload-591175: exit status 2 (381.276273ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
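Both status probes in this post-mortem read a single field from minikube's status via a Go template ({{.APIServer}} earlier, {{.Host}} here), and the harness tolerates the exit status 2 as noted. To see several fields in one call, a sketch assuming the standard status field names (Host, Kubelet, APIServer, Kubeconfig):

	out/minikube-linux-arm64 status -p no-preload-591175 \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'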
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-591175 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-591175 logs -n 25: (1.314225704s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p default-k8s-diff-port-262764 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-262764                                                                                                                                                                                                               │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ delete  │ -p default-k8s-diff-port-262764                                                                                                                                                                                                               │ default-k8s-diff-port-262764 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ delete  │ -p disable-driver-mounts-880590                                                                                                                                                                                                               │ disable-driver-mounts-880590 │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p no-preload-591175 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:59 UTC │
	│ image   │ embed-certs-879861 image list --format=json                                                                                                                                                                                                   │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ pause   │ -p embed-certs-879861 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	│ delete  │ -p embed-certs-879861                                                                                                                                                                                                                         │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ delete  │ -p embed-certs-879861                                                                                                                                                                                                                         │ embed-certs-879861           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p newest-cni-261704 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ addons  │ enable metrics-server -p newest-cni-261704 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-591175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │                     │
	│ stop    │ -p newest-cni-261704 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ stop    │ -p no-preload-591175 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 09:00 UTC │
	│ addons  │ enable dashboard -p newest-cni-261704 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p newest-cni-261704 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 09:00 UTC │
	│ addons  │ enable dashboard -p no-preload-591175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ start   │ -p no-preload-591175 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:01 UTC │
	│ image   │ newest-cni-261704 image list --format=json                                                                                                                                                                                                    │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ pause   │ -p newest-cni-261704 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ delete  │ -p newest-cni-261704                                                                                                                                                                                                                          │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ delete  │ -p newest-cni-261704                                                                                                                                                                                                                          │ newest-cni-261704            │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ start   │ -p auto-082524 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-082524                  │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ image   │ no-preload-591175 image list --format=json                                                                                                                                                                                                    │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ pause   │ -p no-preload-591175 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-591175            │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:00:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:00:27.174663 1253581 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:00:27.175297 1253581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:27.175333 1253581 out.go:374] Setting ErrFile to fd 2...
	I1123 09:00:27.175356 1253581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:27.175650 1253581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 09:00:27.176135 1253581 out.go:368] Setting JSON to false
	I1123 09:00:27.177148 1253581 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34973,"bootTime":1763853455,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 09:00:27.177250 1253581 start.go:143] virtualization:  
	I1123 09:00:27.181465 1253581 out.go:179] * [auto-082524] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 09:00:27.185166 1253581 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 09:00:27.185245 1253581 notify.go:221] Checking for updates...
	I1123 09:00:27.189150 1253581 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:00:27.192453 1253581 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 09:00:27.195534 1253581 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 09:00:27.198742 1253581 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 09:00:27.201963 1253581 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:00:22.321008 1250435 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 09:00:22.321030 1250435 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 09:00:22.373481 1250435 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 09:00:22.373503 1250435 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 09:00:22.424313 1250435 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 09:00:22.424335 1250435 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 09:00:22.455150 1250435 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 09:00:22.455247 1250435 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 09:00:22.486676 1250435 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:00:22.486698 1250435 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 09:00:22.513499 1250435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:00:27.205619 1253581 config.go:182] Loaded profile config "no-preload-591175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:27.205770 1253581 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:00:27.252466 1253581 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 09:00:27.252647 1253581 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:00:27.368318 1253581 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 09:00:27.353022416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:00:27.368430 1253581 docker.go:319] overlay module found
	I1123 09:00:27.371556 1253581 out.go:179] * Using the docker driver based on user configuration
	I1123 09:00:27.374481 1253581 start.go:309] selected driver: docker
	I1123 09:00:27.374498 1253581 start.go:927] validating driver "docker" against <nil>
	I1123 09:00:27.374511 1253581 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:00:27.375248 1253581 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:00:27.477240 1253581 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 09:00:27.467330761 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:00:27.477403 1253581 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:00:27.477630 1253581 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:00:27.480701 1253581 out.go:179] * Using Docker driver with root privileges
	I1123 09:00:27.483471 1253581 cni.go:84] Creating CNI manager for ""
	I1123 09:00:27.483543 1253581 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:00:27.483556 1253581 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:00:27.483633 1253581 start.go:353] cluster config:
	{Name:auto-082524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-082524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1123 09:00:27.486814 1253581 out.go:179] * Starting "auto-082524" primary control-plane node in "auto-082524" cluster
	I1123 09:00:27.489763 1253581 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:00:27.492583 1253581 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:00:27.495489 1253581 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:00:27.495556 1253581 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 09:00:27.495571 1253581 cache.go:65] Caching tarball of preloaded images
	I1123 09:00:27.495652 1253581 preload.go:238] Found /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 09:00:27.495667 1253581 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:00:27.495772 1253581 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/config.json ...
	I1123 09:00:27.495795 1253581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/config.json: {Name:mk8307308d35f5a8dd72a039406096dc09879244 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:27.495948 1253581 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:00:27.514400 1253581 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:00:27.514424 1253581 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:00:27.514442 1253581 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:00:27.514478 1253581 start.go:360] acquireMachinesLock for auto-082524: {Name:mkbda3902800cc164468f93c9a878ecedc5d1cbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:00:27.514580 1253581 start.go:364] duration metric: took 81.802µs to acquireMachinesLock for "auto-082524"
	I1123 09:00:27.514616 1253581 start.go:93] Provisioning new machine with config: &{Name:auto-082524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-082524 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:00:27.514685 1253581 start.go:125] createHost starting for "" (driver="docker")
	I1123 09:00:28.463020 1250435 node_ready.go:49] node "no-preload-591175" is "Ready"
	I1123 09:00:28.463046 1250435 node_ready.go:38] duration metric: took 6.501185019s for node "no-preload-591175" to be "Ready" ...
	I1123 09:00:28.463060 1250435 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:00:28.463116 1250435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:00:28.698143 1250435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.785689937s)
	I1123 09:00:31.786472 1250435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.862247196s)
	I1123 09:00:31.786589 1250435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.273056322s)
	I1123 09:00:31.786691 1250435 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.323564019s)
	I1123 09:00:31.786710 1250435 api_server.go:72] duration metric: took 10.392698363s to wait for apiserver process to appear ...
	I1123 09:00:31.786716 1250435 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:00:31.786737 1250435 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 09:00:31.794310 1250435 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-591175 addons enable metrics-server
	
	I1123 09:00:31.814332 1250435 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1123 09:00:27.518066 1253581 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 09:00:27.518317 1253581 start.go:159] libmachine.API.Create for "auto-082524" (driver="docker")
	I1123 09:00:27.518352 1253581 client.go:173] LocalClient.Create starting
	I1123 09:00:27.518435 1253581 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem
	I1123 09:00:27.518473 1253581 main.go:143] libmachine: Decoding PEM data...
	I1123 09:00:27.518495 1253581 main.go:143] libmachine: Parsing certificate...
	I1123 09:00:27.518559 1253581 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem
	I1123 09:00:27.518584 1253581 main.go:143] libmachine: Decoding PEM data...
	I1123 09:00:27.518600 1253581 main.go:143] libmachine: Parsing certificate...
	I1123 09:00:27.519092 1253581 cli_runner.go:164] Run: docker network inspect auto-082524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 09:00:27.540842 1253581 cli_runner.go:211] docker network inspect auto-082524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 09:00:27.540933 1253581 network_create.go:284] running [docker network inspect auto-082524] to gather additional debugging logs...
	I1123 09:00:27.540966 1253581 cli_runner.go:164] Run: docker network inspect auto-082524
	W1123 09:00:27.560831 1253581 cli_runner.go:211] docker network inspect auto-082524 returned with exit code 1
	I1123 09:00:27.560863 1253581 network_create.go:287] error running [docker network inspect auto-082524]: docker network inspect auto-082524: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-082524 not found
	I1123 09:00:27.560878 1253581 network_create.go:289] output of [docker network inspect auto-082524]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-082524 not found
	
	** /stderr **
	I1123 09:00:27.560969 1253581 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:00:27.581238 1253581 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-32d396d9f7df IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:9b:29:4a:5c:ab} reservation:<nil>}
	I1123 09:00:27.581572 1253581 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-859c97accd92 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:ea:cf:62:f4:f8} reservation:<nil>}
	I1123 09:00:27.581889 1253581 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-50e966d7b39a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:1d:b6:b9:b9:ef} reservation:<nil>}
	I1123 09:00:27.582313 1253581 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a19150}
	I1123 09:00:27.582336 1253581 network_create.go:124] attempt to create docker network auto-082524 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 09:00:27.582395 1253581 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-082524 auto-082524
	I1123 09:00:27.678067 1253581 network_create.go:108] docker network auto-082524 192.168.76.0/24 created
	I1123 09:00:27.678102 1253581 kic.go:121] calculated static IP "192.168.76.2" for the "auto-082524" container
	I1123 09:00:27.678172 1253581 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 09:00:27.694493 1253581 cli_runner.go:164] Run: docker volume create auto-082524 --label name.minikube.sigs.k8s.io=auto-082524 --label created_by.minikube.sigs.k8s.io=true
	I1123 09:00:27.711347 1253581 oci.go:103] Successfully created a docker volume auto-082524
	I1123 09:00:27.711430 1253581 cli_runner.go:164] Run: docker run --rm --name auto-082524-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-082524 --entrypoint /usr/bin/test -v auto-082524:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 09:00:28.408741 1253581 oci.go:107] Successfully prepared a docker volume auto-082524
	I1123 09:00:28.408798 1253581 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:00:28.408807 1253581 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 09:00:28.408870 1253581 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-082524:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 09:00:31.828410 1250435 addons.go:530] duration metric: took 10.433981611s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1123 09:00:31.863062 1250435 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 09:00:31.871386 1250435 api_server.go:141] control plane version: v1.34.1
	I1123 09:00:31.871422 1250435 api_server.go:131] duration metric: took 84.695167ms to wait for apiserver health ...
	I1123 09:00:31.871433 1250435 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:00:31.886982 1250435 system_pods.go:59] 8 kube-system pods found
	I1123 09:00:31.887028 1250435 system_pods.go:61] "coredns-66bc5c9577-zwlsw" [4493cf17-56c7-4aec-aff9-f1b7a47398ea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:00:31.887037 1250435 system_pods.go:61] "etcd-no-preload-591175" [d2307eaa-f09d-4d85-8172-b403550f572f] Running
	I1123 09:00:31.887043 1250435 system_pods.go:61] "kindnet-v65j2" [c422d680-2063-435a-8b26-e265e3554728] Running
	I1123 09:00:31.887051 1250435 system_pods.go:61] "kube-apiserver-no-preload-591175" [07643f8f-afbf-48fd-9a2c-b68e6f2a69f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:00:31.887059 1250435 system_pods.go:61] "kube-controller-manager-no-preload-591175" [153ceee0-38e4-41e6-98bc-915c5d18b057] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:00:31.887063 1250435 system_pods.go:61] "kube-proxy-rblwh" [8c4a2941-2f19-43ba-8f9a-7a48072b1223] Running
	I1123 09:00:31.887072 1250435 system_pods.go:61] "kube-scheduler-no-preload-591175" [ce19b8a6-00bd-4cdc-a245-0a8f9551e38d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:00:31.887076 1250435 system_pods.go:61] "storage-provisioner" [923af3fc-5d78-45d7-ad14-fd020a72b76d] Running
	I1123 09:00:31.887081 1250435 system_pods.go:74] duration metric: took 15.644046ms to wait for pod list to return data ...
	I1123 09:00:31.887094 1250435 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:00:31.919638 1250435 default_sa.go:45] found service account: "default"
	I1123 09:00:31.919674 1250435 default_sa.go:55] duration metric: took 32.572446ms for default service account to be created ...
	I1123 09:00:31.919686 1250435 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:00:31.946680 1250435 system_pods.go:86] 8 kube-system pods found
	I1123 09:00:31.946720 1250435 system_pods.go:89] "coredns-66bc5c9577-zwlsw" [4493cf17-56c7-4aec-aff9-f1b7a47398ea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:00:31.946729 1250435 system_pods.go:89] "etcd-no-preload-591175" [d2307eaa-f09d-4d85-8172-b403550f572f] Running
	I1123 09:00:31.946735 1250435 system_pods.go:89] "kindnet-v65j2" [c422d680-2063-435a-8b26-e265e3554728] Running
	I1123 09:00:31.946742 1250435 system_pods.go:89] "kube-apiserver-no-preload-591175" [07643f8f-afbf-48fd-9a2c-b68e6f2a69f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:00:31.946748 1250435 system_pods.go:89] "kube-controller-manager-no-preload-591175" [153ceee0-38e4-41e6-98bc-915c5d18b057] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:00:31.946753 1250435 system_pods.go:89] "kube-proxy-rblwh" [8c4a2941-2f19-43ba-8f9a-7a48072b1223] Running
	I1123 09:00:31.946765 1250435 system_pods.go:89] "kube-scheduler-no-preload-591175" [ce19b8a6-00bd-4cdc-a245-0a8f9551e38d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:00:31.946771 1250435 system_pods.go:89] "storage-provisioner" [923af3fc-5d78-45d7-ad14-fd020a72b76d] Running
	I1123 09:00:31.946780 1250435 system_pods.go:126] duration metric: took 27.087112ms to wait for k8s-apps to be running ...
	I1123 09:00:31.946791 1250435 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:00:31.946849 1250435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:00:31.963043 1250435 system_svc.go:56] duration metric: took 16.240583ms WaitForService to wait for kubelet
	I1123 09:00:31.963125 1250435 kubeadm.go:587] duration metric: took 10.569111374s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:00:31.963160 1250435 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:00:32.003125 1250435 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:00:32.003267 1250435 node_conditions.go:123] node cpu capacity is 2
	I1123 09:00:32.003301 1250435 node_conditions.go:105] duration metric: took 40.040256ms to run NodePressure ...
	I1123 09:00:32.003340 1250435 start.go:242] waiting for startup goroutines ...
	I1123 09:00:32.003365 1250435 start.go:247] waiting for cluster config update ...
	I1123 09:00:32.003391 1250435 start.go:256] writing updated cluster config ...
	I1123 09:00:32.004423 1250435 ssh_runner.go:195] Run: rm -f paused
	I1123 09:00:32.010008 1250435 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:00:32.034627 1250435 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zwlsw" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:33.271733 1253581 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-082524:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.862830407s)
	I1123 09:00:33.271770 1253581 kic.go:203] duration metric: took 4.862958435s to extract preloaded images to volume ...
	W1123 09:00:33.271909 1253581 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 09:00:33.272024 1253581 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 09:00:33.329240 1253581 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-082524 --name auto-082524 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-082524 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-082524 --network auto-082524 --ip 192.168.76.2 --volume auto-082524:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
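The docker run above publishes the guest's SSH, Kubernetes API and auxiliary ports on dynamically assigned loopback ports, which the docker container inspect calls below read back. A quick manual equivalent for the SSH mapping (34562 is the host port this run happened to get):

    $ docker port auto-082524 22/tcp
    127.0.0.1:34562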
	I1123 09:00:33.728027 1253581 cli_runner.go:164] Run: docker container inspect auto-082524 --format={{.State.Running}}
	I1123 09:00:33.751691 1253581 cli_runner.go:164] Run: docker container inspect auto-082524 --format={{.State.Status}}
	I1123 09:00:33.780211 1253581 cli_runner.go:164] Run: docker exec auto-082524 stat /var/lib/dpkg/alternatives/iptables
	I1123 09:00:33.842067 1253581 oci.go:144] the created container "auto-082524" has a running status.
	I1123 09:00:33.842094 1253581 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/auto-082524/id_rsa...
	I1123 09:00:34.410946 1253581 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/auto-082524/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 09:00:34.453980 1253581 cli_runner.go:164] Run: docker container inspect auto-082524 --format={{.State.Status}}
	I1123 09:00:34.489690 1253581 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 09:00:34.489721 1253581 kic_runner.go:114] Args: [docker exec --privileged auto-082524 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 09:00:34.579040 1253581 cli_runner.go:164] Run: docker container inspect auto-082524 --format={{.State.Status}}
	I1123 09:00:34.605880 1253581 machine.go:94] provisionDockerMachine start ...
	I1123 09:00:34.605968 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:00:34.637108 1253581 main.go:143] libmachine: Using SSH client type: native
	I1123 09:00:34.637470 1253581 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34562 <nil> <nil>}
	I1123 09:00:34.637486 1253581 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:00:34.638043 1253581 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60518->127.0.0.1:34562: read: connection reset by peer
	W1123 09:00:34.053493 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	W1123 09:00:36.540278 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	I1123 09:00:37.799235 1253581 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-082524
	
	I1123 09:00:37.799261 1253581 ubuntu.go:182] provisioning hostname "auto-082524"
	I1123 09:00:37.799323 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:00:37.829680 1253581 main.go:143] libmachine: Using SSH client type: native
	I1123 09:00:37.829992 1253581 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34562 <nil> <nil>}
	I1123 09:00:37.830009 1253581 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-082524 && echo "auto-082524" | sudo tee /etc/hostname
	I1123 09:00:38.004218 1253581 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-082524
	
	I1123 09:00:38.004323 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:00:38.032213 1253581 main.go:143] libmachine: Using SSH client type: native
	I1123 09:00:38.032538 1253581 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34562 <nil> <nil>}
	I1123 09:00:38.032555 1253581 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-082524' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-082524/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-082524' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:00:38.196629 1253581 main.go:143] libmachine: SSH cmd err, output: <nil>: 
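The SSH script above only edits /etc/hosts when no entry for the new hostname exists, either rewriting the conventional 127.0.1.1 line or appending one. A typical result (not shown in this log) would be:

    $ grep auto-082524 /etc/hosts
    127.0.1.1 auto-082524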
	I1123 09:00:38.196703 1253581 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-1041293/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-1041293/.minikube}
	I1123 09:00:38.196748 1253581 ubuntu.go:190] setting up certificates
	I1123 09:00:38.196789 1253581 provision.go:84] configureAuth start
	I1123 09:00:38.196871 1253581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-082524
	I1123 09:00:38.216617 1253581 provision.go:143] copyHostCerts
	I1123 09:00:38.216673 1253581 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem, removing ...
	I1123 09:00:38.216681 1253581 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem
	I1123 09:00:38.216746 1253581 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.pem (1078 bytes)
	I1123 09:00:38.216836 1253581 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem, removing ...
	I1123 09:00:38.216842 1253581 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem
	I1123 09:00:38.216871 1253581 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/cert.pem (1123 bytes)
	I1123 09:00:38.216928 1253581 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem, removing ...
	I1123 09:00:38.216932 1253581 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem
	I1123 09:00:38.216955 1253581 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-1041293/.minikube/key.pem (1675 bytes)
	I1123 09:00:38.217006 1253581 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem org=jenkins.auto-082524 san=[127.0.0.1 192.168.76.2 auto-082524 localhost minikube]
	I1123 09:00:38.433992 1253581 provision.go:177] copyRemoteCerts
	I1123 09:00:38.434099 1253581 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:00:38.434181 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:00:38.451940 1253581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34562 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/auto-082524/id_rsa Username:docker}
	I1123 09:00:38.581808 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1123 09:00:38.607485 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:00:38.632512 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:00:38.653189 1253581 provision.go:87] duration metric: took 456.365813ms to configureAuth
	I1123 09:00:38.653265 1253581 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:00:38.653496 1253581 config.go:182] Loaded profile config "auto-082524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:38.653652 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:00:38.672512 1253581 main.go:143] libmachine: Using SSH client type: native
	I1123 09:00:38.672836 1253581 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 34562 <nil> <nil>}
	I1123 09:00:38.672850 1253581 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:00:39.092799 1253581 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:00:39.092909 1253581 machine.go:97] duration metric: took 4.487006469s to provisionDockerMachine
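The sysconfig command a few lines above drops a small environment file into the guest and restarts CRI-O so it takes effect (the kicbase crio unit presumably reads it as an environment file; that detail is not shown in the log). The flag marks the service CIDR as an insecure registry range, so registries exposed on cluster service IPs, such as the registry addon, can be pulled from without TLS:

    $ cat /etc/sysconfig/crio.minikube
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '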
	I1123 09:00:39.092935 1253581 client.go:176] duration metric: took 11.574572231s to LocalClient.Create
	I1123 09:00:39.092987 1253581 start.go:167] duration metric: took 11.574652401s to libmachine.API.Create "auto-082524"
	I1123 09:00:39.093012 1253581 start.go:293] postStartSetup for "auto-082524" (driver="docker")
	I1123 09:00:39.093038 1253581 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:00:39.093127 1253581 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:00:39.093198 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:00:39.120709 1253581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34562 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/auto-082524/id_rsa Username:docker}
	I1123 09:00:39.239958 1253581 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:00:39.244171 1253581 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:00:39.244200 1253581 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:00:39.244212 1253581 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/addons for local assets ...
	I1123 09:00:39.244269 1253581 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-1041293/.minikube/files for local assets ...
	I1123 09:00:39.244359 1253581 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem -> 10431592.pem in /etc/ssl/certs
	I1123 09:00:39.244474 1253581 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:00:39.259041 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 09:00:39.288930 1253581 start.go:296] duration metric: took 195.889012ms for postStartSetup
	I1123 09:00:39.289290 1253581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-082524
	I1123 09:00:39.321165 1253581 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/config.json ...
	I1123 09:00:39.321442 1253581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:00:39.321491 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:00:39.351353 1253581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34562 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/auto-082524/id_rsa Username:docker}
	I1123 09:00:39.472116 1253581 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:00:39.477644 1253581 start.go:128] duration metric: took 11.962944239s to createHost
	I1123 09:00:39.477674 1253581 start.go:83] releasing machines lock for "auto-082524", held for 11.963080809s
	I1123 09:00:39.477748 1253581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-082524
	I1123 09:00:39.497515 1253581 ssh_runner.go:195] Run: cat /version.json
	I1123 09:00:39.497569 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:00:39.497822 1253581 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:00:39.497879 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:00:39.524209 1253581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34562 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/auto-082524/id_rsa Username:docker}
	I1123 09:00:39.551176 1253581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34562 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/auto-082524/id_rsa Username:docker}
	I1123 09:00:39.645267 1253581 ssh_runner.go:195] Run: systemctl --version
	I1123 09:00:39.778359 1253581 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:00:39.856101 1253581 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:00:39.862127 1253581 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:00:39.862200 1253581 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:00:39.903139 1253581 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
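Rather than deleting pre-existing bridge/podman CNI configs, the find/-exec mv above renames them with a .mk_disabled suffix so they stop matching the CNI config patterns but can be restored later; for the file named in this run the effect is equivalent to:

    $ sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled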
	I1123 09:00:39.903241 1253581 start.go:496] detecting cgroup driver to use...
	I1123 09:00:39.903287 1253581 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 09:00:39.903370 1253581 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:00:39.926031 1253581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:00:39.947207 1253581 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:00:39.947270 1253581 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:00:39.968057 1253581 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:00:39.991554 1253581 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:00:40.199394 1253581 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:00:40.394996 1253581 docker.go:234] disabling docker service ...
	I1123 09:00:40.395059 1253581 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:00:40.420115 1253581 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:00:40.438246 1253581 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:00:40.606105 1253581 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:00:40.766591 1253581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:00:40.786631 1253581 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:00:40.801834 1253581 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:00:40.801910 1253581 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:40.810982 1253581 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 09:00:40.811047 1253581 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:40.820627 1253581 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:40.829685 1253581 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:40.838412 1253581 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:00:40.847813 1253581 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:40.857615 1253581 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:40.883339 1253581 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:00:40.892764 1253581 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:00:40.901716 1253581 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:00:40.910056 1253581 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:00:41.080514 1253581 ssh_runner.go:195] Run: sudo systemctl restart crio
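The sed/grep edits above rewrite CRI-O's drop-in before the restart; afterwards the relevant lines of /etc/crio/crio.conf.d/02-crio.conf should read approximately:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]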
	I1123 09:00:41.770646 1253581 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:00:41.770722 1253581 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:00:41.775554 1253581 start.go:564] Will wait 60s for crictl version
	I1123 09:00:41.775618 1253581 ssh_runner.go:195] Run: which crictl
	I1123 09:00:41.780031 1253581 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:00:41.821712 1253581 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:00:41.821804 1253581 ssh_runner.go:195] Run: crio --version
	I1123 09:00:41.855892 1253581 ssh_runner.go:195] Run: crio --version
	I1123 09:00:41.892530 1253581 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:00:41.895579 1253581 cli_runner.go:164] Run: docker network inspect auto-082524 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:00:41.912867 1253581 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 09:00:41.917333 1253581 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:00:41.932843 1253581 kubeadm.go:884] updating cluster {Name:auto-082524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-082524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:00:41.932970 1253581 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:00:41.933025 1253581 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:00:41.975383 1253581 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:00:41.975404 1253581 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:00:41.975459 1253581 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:00:42.036417 1253581 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:00:42.036491 1253581 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:00:42.036526 1253581 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 09:00:42.036686 1253581 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-082524 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-082524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:00:42.036809 1253581 ssh_runner.go:195] Run: crio config
	I1123 09:00:42.167256 1253581 cni.go:84] Creating CNI manager for ""
	I1123 09:00:42.167347 1253581 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:00:42.167389 1253581 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:00:42.167447 1253581 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-082524 NodeName:auto-082524 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:00:42.167778 1253581 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-082524"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:00:42.167931 1253581 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	W1123 09:00:38.541601 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	W1123 09:00:40.543850 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	I1123 09:00:42.187599 1253581 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:00:42.187768 1253581 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:00:42.201507 1253581 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1123 09:00:42.225027 1253581 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:00:42.255037 1253581 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
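The kubeadm configuration dumped above is what has just been copied to /var/tmp/minikube/kubeadm.yaml.new; it is promoted to kubeadm.yaml and handed to kubeadm init --config further down. If needed, recent kubeadm releases can sanity-check such a file before running init (hypothetical invocation, not part of this run):

    $ sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml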
	I1123 09:00:42.275241 1253581 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:00:42.282345 1253581 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:00:42.297897 1253581 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:00:42.461706 1253581 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:00:42.483152 1253581 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524 for IP: 192.168.76.2
	I1123 09:00:42.483252 1253581 certs.go:195] generating shared ca certs ...
	I1123 09:00:42.483283 1253581 certs.go:227] acquiring lock for ca certs: {Name:mk8b2dd1177c57b74f955f055073d275001ee616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:42.483514 1253581 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key
	I1123 09:00:42.483608 1253581 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key
	I1123 09:00:42.483650 1253581 certs.go:257] generating profile certs ...
	I1123 09:00:42.483740 1253581 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/client.key
	I1123 09:00:42.483771 1253581 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/client.crt with IP's: []
	I1123 09:00:42.754685 1253581 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/client.crt ...
	I1123 09:00:42.754764 1253581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/client.crt: {Name:mkb5a2df2d6fc7d1c2c79cc42d5f3e5c1ce1431d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:42.754982 1253581 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/client.key ...
	I1123 09:00:42.755017 1253581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/client.key: {Name:mk0cb10441f015badf7f4250625e61214d3ef0c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:42.755161 1253581 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.key.0c00d17a
	I1123 09:00:42.755230 1253581 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.crt.0c00d17a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 09:00:42.976701 1253581 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.crt.0c00d17a ...
	I1123 09:00:42.976770 1253581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.crt.0c00d17a: {Name:mk12b357555e3b98ad8a5e031be2a7c68f8dbaff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:42.976960 1253581 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.key.0c00d17a ...
	I1123 09:00:42.976996 1253581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.key.0c00d17a: {Name:mkd82361e7134cbeb1f8451263c94b7d64c8d187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:42.977133 1253581 certs.go:382] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.crt.0c00d17a -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.crt
	I1123 09:00:42.977252 1253581 certs.go:386] copying /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.key.0c00d17a -> /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.key
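The apiserver serving certificate generated above is signed for the IPs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.76.2); once the copy below lands at /var/lib/minikube/certs/apiserver.crt, the SANs can be confirmed with a standard openssl query (hypothetical check, not run by the test):

    $ openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A2 'Subject Alternative Name'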
	I1123 09:00:42.977370 1253581 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/proxy-client.key
	I1123 09:00:42.977408 1253581 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/proxy-client.crt with IP's: []
	I1123 09:00:43.073239 1253581 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/proxy-client.crt ...
	I1123 09:00:43.073270 1253581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/proxy-client.crt: {Name:mk65602301f27ead6e2f766b14e4c70d5dcc7e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:43.073459 1253581 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/proxy-client.key ...
	I1123 09:00:43.073475 1253581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/proxy-client.key: {Name:mkd414626ed6d88527cbb7f9a9a23e8ea98a0db8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:43.073714 1253581 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem (1338 bytes)
	W1123 09:00:43.073780 1253581 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159_empty.pem, impossibly tiny 0 bytes
	I1123 09:00:43.073795 1253581 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:00:43.073848 1253581 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:00:43.073892 1253581 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:00:43.073933 1253581 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/key.pem (1675 bytes)
	I1123 09:00:43.073999 1253581 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem (1708 bytes)
	I1123 09:00:43.074599 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:00:43.096166 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:00:43.115695 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:00:43.137353 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 09:00:43.156779 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1123 09:00:43.176169 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:00:43.197918 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:00:43.217494 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:00:43.236423 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:00:43.255644 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/certs/1043159.pem --> /usr/share/ca-certificates/1043159.pem (1338 bytes)
	I1123 09:00:43.274099 1253581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/ssl/certs/10431592.pem --> /usr/share/ca-certificates/10431592.pem (1708 bytes)
	I1123 09:00:43.295858 1253581 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:00:43.309901 1253581 ssh_runner.go:195] Run: openssl version
	I1123 09:00:43.319137 1253581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:00:43.328614 1253581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:00:43.333046 1253581 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:56 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:00:43.333140 1253581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:00:43.392187 1253581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:00:43.402528 1253581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1043159.pem && ln -fs /usr/share/ca-certificates/1043159.pem /etc/ssl/certs/1043159.pem"
	I1123 09:00:43.422210 1253581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1043159.pem
	I1123 09:00:43.426825 1253581 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:03 /usr/share/ca-certificates/1043159.pem
	I1123 09:00:43.426918 1253581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1043159.pem
	I1123 09:00:43.508095 1253581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1043159.pem /etc/ssl/certs/51391683.0"
	I1123 09:00:43.522063 1253581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10431592.pem && ln -fs /usr/share/ca-certificates/10431592.pem /etc/ssl/certs/10431592.pem"
	I1123 09:00:43.531144 1253581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10431592.pem
	I1123 09:00:43.535325 1253581 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:03 /usr/share/ca-certificates/10431592.pem
	I1123 09:00:43.535421 1253581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10431592.pem
	I1123 09:00:43.576875 1253581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10431592.pem /etc/ssl/certs/3ec20f2e.0"
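The three blocks above install each certificate into the guest's OpenSSL trust store: the PEM is linked under /etc/ssl/certs, openssl x509 -hash computes its subject hash, and a <hash>.0 symlink is created so hash-based lookup finds it. For minikubeCA the end state is roughly:

    $ ls -l /etc/ssl/certs/b5213941.0
    /etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem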
	I1123 09:00:43.587718 1253581 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:00:43.592261 1253581 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 09:00:43.592343 1253581 kubeadm.go:401] StartCluster: {Name:auto-082524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-082524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:00:43.592429 1253581 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:00:43.592528 1253581 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:00:43.621353 1253581 cri.go:89] found id: ""
	I1123 09:00:43.621450 1253581 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:00:43.633005 1253581 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 09:00:43.644389 1253581 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 09:00:43.644473 1253581 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 09:00:43.659438 1253581 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 09:00:43.659458 1253581 kubeadm.go:158] found existing configuration files:
	
	I1123 09:00:43.659543 1253581 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 09:00:43.668585 1253581 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 09:00:43.668672 1253581 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 09:00:43.676719 1253581 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 09:00:43.685454 1253581 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 09:00:43.685547 1253581 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 09:00:43.693319 1253581 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 09:00:43.701814 1253581 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 09:00:43.701909 1253581 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 09:00:43.709594 1253581 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 09:00:43.718338 1253581 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 09:00:43.718432 1253581 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 09:00:43.726148 1253581 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 09:00:43.788657 1253581 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 09:00:43.789219 1253581 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 09:00:43.830497 1253581 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 09:00:43.830608 1253581 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 09:00:43.830702 1253581 kubeadm.go:319] OS: Linux
	I1123 09:00:43.830797 1253581 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 09:00:43.830873 1253581 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 09:00:43.830957 1253581 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 09:00:43.831038 1253581 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 09:00:43.831118 1253581 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 09:00:43.831218 1253581 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 09:00:43.831291 1253581 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 09:00:43.831360 1253581 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 09:00:43.831463 1253581 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 09:00:43.916383 1253581 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 09:00:43.916551 1253581 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 09:00:43.916659 1253581 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 09:00:43.927545 1253581 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 09:00:43.936289 1253581 out.go:252]   - Generating certificates and keys ...
	I1123 09:00:43.936390 1253581 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 09:00:43.936468 1253581 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 09:00:44.530191 1253581 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 09:00:45.943987 1253581 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 09:00:46.826249 1253581 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 09:00:46.979457 1253581 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	W1123 09:00:43.041182 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	W1123 09:00:45.042115 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	I1123 09:00:47.458716 1253581 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 09:00:47.459125 1253581 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-082524 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 09:00:47.998507 1253581 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 09:00:47.998645 1253581 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-082524 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 09:00:48.380812 1253581 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 09:00:48.511077 1253581 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 09:00:48.854434 1253581 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 09:00:48.854735 1253581 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 09:00:49.288584 1253581 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 09:00:49.714490 1253581 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 09:00:50.098139 1253581 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 09:00:50.567609 1253581 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 09:00:50.720479 1253581 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 09:00:50.721048 1253581 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 09:00:50.723632 1253581 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 09:00:50.727304 1253581 out.go:252]   - Booting up control plane ...
	I1123 09:00:50.727406 1253581 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 09:00:50.727484 1253581 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 09:00:50.727551 1253581 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 09:00:50.742932 1253581 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 09:00:50.743051 1253581 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 09:00:50.749874 1253581 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 09:00:50.750436 1253581 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 09:00:50.750683 1253581 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 09:00:50.891840 1253581 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 09:00:50.891984 1253581 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 09:00:51.892989 1253581 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001634135s
	I1123 09:00:51.896672 1253581 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 09:00:51.896769 1253581 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 09:00:51.897119 1253581 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 09:00:51.897212 1253581 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1123 09:00:47.541879 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	W1123 09:00:49.543132 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	W1123 09:00:52.041708 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	I1123 09:00:56.478046 1253581 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.580993089s
	I1123 09:00:57.020571 1253581 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.123854904s
	W1123 09:00:54.049554 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	W1123 09:00:56.540484 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	I1123 09:00:57.898604 1253581 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001843493s
	I1123 09:00:57.923900 1253581 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 09:00:57.946691 1253581 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 09:00:57.959135 1253581 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 09:00:57.959373 1253581 kubeadm.go:319] [mark-control-plane] Marking the node auto-082524 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 09:00:57.974900 1253581 kubeadm.go:319] [bootstrap-token] Using token: hg5tsc.npysbiukpzp0hebw
	I1123 09:00:57.977791 1253581 out.go:252]   - Configuring RBAC rules ...
	I1123 09:00:57.977923 1253581 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 09:00:57.981631 1253581 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 09:00:57.989617 1253581 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 09:00:57.993491 1253581 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 09:00:58.002282 1253581 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 09:00:58.011263 1253581 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 09:00:58.309352 1253581 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 09:00:58.746681 1253581 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 09:00:59.309119 1253581 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 09:00:59.312413 1253581 kubeadm.go:319] 
	I1123 09:00:59.312490 1253581 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 09:00:59.312495 1253581 kubeadm.go:319] 
	I1123 09:00:59.312581 1253581 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 09:00:59.312586 1253581 kubeadm.go:319] 
	I1123 09:00:59.312611 1253581 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 09:00:59.312670 1253581 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 09:00:59.312726 1253581 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 09:00:59.312730 1253581 kubeadm.go:319] 
	I1123 09:00:59.312784 1253581 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 09:00:59.312788 1253581 kubeadm.go:319] 
	I1123 09:00:59.312835 1253581 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 09:00:59.312840 1253581 kubeadm.go:319] 
	I1123 09:00:59.312892 1253581 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 09:00:59.312974 1253581 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 09:00:59.313044 1253581 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 09:00:59.313047 1253581 kubeadm.go:319] 
	I1123 09:00:59.313132 1253581 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 09:00:59.313211 1253581 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 09:00:59.313215 1253581 kubeadm.go:319] 
	I1123 09:00:59.313299 1253581 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hg5tsc.npysbiukpzp0hebw \
	I1123 09:00:59.313403 1253581 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb \
	I1123 09:00:59.313423 1253581 kubeadm.go:319] 	--control-plane 
	I1123 09:00:59.313435 1253581 kubeadm.go:319] 
	I1123 09:00:59.313521 1253581 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 09:00:59.313526 1253581 kubeadm.go:319] 
	I1123 09:00:59.313608 1253581 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hg5tsc.npysbiukpzp0hebw \
	I1123 09:00:59.313710 1253581 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e6c64110c455e4c14d22f72e74bf38a802f7f936ff90c9cbf912e3ab6e0d3eb 
	I1123 09:00:59.318673 1253581 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 09:00:59.318920 1253581 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 09:00:59.319033 1253581 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 09:00:59.319122 1253581 cni.go:84] Creating CNI manager for ""
	I1123 09:00:59.319137 1253581 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:00:59.324158 1253581 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 09:00:59.326936 1253581 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 09:00:59.331618 1253581 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 09:00:59.331639 1253581 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 09:00:59.344288 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 09:01:00.151076 1253581 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 09:01:00.151265 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-082524 minikube.k8s.io/updated_at=2025_11_23T09_01_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=auto-082524 minikube.k8s.io/primary=true
	I1123 09:01:00.151265 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:01:00.508403 1253581 ops.go:34] apiserver oom_adj: -16
	I1123 09:01:00.508439 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:01:01.009349 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:01:01.509416 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:01:02.008481 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1123 09:00:58.541297 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	W1123 09:01:01.040584 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	I1123 09:01:02.508969 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:01:03.009512 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:01:03.508753 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:01:04.008553 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:01:04.508546 1253581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:01:04.678730 1253581 kubeadm.go:1114] duration metric: took 4.527617369s to wait for elevateKubeSystemPrivileges
	I1123 09:01:04.678763 1253581 kubeadm.go:403] duration metric: took 21.086424458s to StartCluster
	I1123 09:01:04.678780 1253581 settings.go:142] acquiring lock: {Name:mk23f3092f33e47ced9558cb4bac2b30c55547fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:01:04.678840 1253581 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 09:01:04.679873 1253581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-1041293/kubeconfig: {Name:mkcf9e0bbf24371418de92eff3c9c3ea5d063f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:01:04.680098 1253581 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:01:04.680183 1253581 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 09:01:04.680410 1253581 config.go:182] Loaded profile config "auto-082524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:01:04.680451 1253581 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:01:04.680515 1253581 addons.go:70] Setting storage-provisioner=true in profile "auto-082524"
	I1123 09:01:04.680530 1253581 addons.go:239] Setting addon storage-provisioner=true in "auto-082524"
	I1123 09:01:04.680556 1253581 host.go:66] Checking if "auto-082524" exists ...
	I1123 09:01:04.681036 1253581 cli_runner.go:164] Run: docker container inspect auto-082524 --format={{.State.Status}}
	I1123 09:01:04.681503 1253581 addons.go:70] Setting default-storageclass=true in profile "auto-082524"
	I1123 09:01:04.681527 1253581 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-082524"
	I1123 09:01:04.681839 1253581 cli_runner.go:164] Run: docker container inspect auto-082524 --format={{.State.Status}}
	I1123 09:01:04.683461 1253581 out.go:179] * Verifying Kubernetes components...
	I1123 09:01:04.686614 1253581 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:01:04.721303 1253581 addons.go:239] Setting addon default-storageclass=true in "auto-082524"
	I1123 09:01:04.721341 1253581 host.go:66] Checking if "auto-082524" exists ...
	I1123 09:01:04.721796 1253581 cli_runner.go:164] Run: docker container inspect auto-082524 --format={{.State.Status}}
	I1123 09:01:04.729054 1253581 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:01:04.734485 1253581 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:01:04.734509 1253581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:01:04.734572 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:01:04.756168 1253581 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:01:04.756194 1253581 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:01:04.756302 1253581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-082524
	I1123 09:01:04.805057 1253581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34562 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/auto-082524/id_rsa Username:docker}
	I1123 09:01:04.806551 1253581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34562 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/auto-082524/id_rsa Username:docker}
	I1123 09:01:05.208872 1253581 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:01:05.216571 1253581 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 09:01:05.216672 1253581 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:01:05.257082 1253581 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:01:05.656562 1253581 node_ready.go:35] waiting up to 15m0s for node "auto-082524" to be "Ready" ...
	I1123 09:01:05.656796 1253581 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 09:01:05.963123 1253581 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1123 09:01:05.965999 1253581 addons.go:530] duration metric: took 1.285536743s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 09:01:06.160766 1253581 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-082524" context rescaled to 1 replicas
	W1123 09:01:03.541571 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	W1123 09:01:06.040133 1250435 pod_ready.go:104] pod "coredns-66bc5c9577-zwlsw" is not "Ready", error: <nil>
	I1123 09:01:07.542102 1250435 pod_ready.go:94] pod "coredns-66bc5c9577-zwlsw" is "Ready"
	I1123 09:01:07.542131 1250435 pod_ready.go:86] duration metric: took 35.507438116s for pod "coredns-66bc5c9577-zwlsw" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:07.544681 1250435 pod_ready.go:83] waiting for pod "etcd-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:07.550183 1250435 pod_ready.go:94] pod "etcd-no-preload-591175" is "Ready"
	I1123 09:01:07.550213 1250435 pod_ready.go:86] duration metric: took 5.505879ms for pod "etcd-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:07.552219 1250435 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:07.557404 1250435 pod_ready.go:94] pod "kube-apiserver-no-preload-591175" is "Ready"
	I1123 09:01:07.557440 1250435 pod_ready.go:86] duration metric: took 5.189359ms for pod "kube-apiserver-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:07.559641 1250435 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:07.739120 1250435 pod_ready.go:94] pod "kube-controller-manager-no-preload-591175" is "Ready"
	I1123 09:01:07.739243 1250435 pod_ready.go:86] duration metric: took 179.577384ms for pod "kube-controller-manager-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:07.940117 1250435 pod_ready.go:83] waiting for pod "kube-proxy-rblwh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.338393 1250435 pod_ready.go:94] pod "kube-proxy-rblwh" is "Ready"
	I1123 09:01:08.338430 1250435 pod_ready.go:86] duration metric: took 398.278599ms for pod "kube-proxy-rblwh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.538441 1250435 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.939165 1250435 pod_ready.go:94] pod "kube-scheduler-no-preload-591175" is "Ready"
	I1123 09:01:08.939261 1250435 pod_ready.go:86] duration metric: took 400.776941ms for pod "kube-scheduler-no-preload-591175" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.939275 1250435 pod_ready.go:40] duration metric: took 36.929188288s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:01:09.012877 1250435 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 09:01:09.016181 1250435 out.go:179] * Done! kubectl is now configured to use "no-preload-591175" cluster and "default" namespace by default
	W1123 09:01:07.660267 1253581 node_ready.go:57] node "auto-082524" has "Ready":"False" status (will retry)
	W1123 09:01:10.159933 1253581 node_ready.go:57] node "auto-082524" has "Ready":"False" status (will retry)
	W1123 09:01:12.160427 1253581 node_ready.go:57] node "auto-082524" has "Ready":"False" status (will retry)
	W1123 09:01:14.659325 1253581 node_ready.go:57] node "auto-082524" has "Ready":"False" status (will retry)
	W1123 09:01:16.659625 1253581 node_ready.go:57] node "auto-082524" has "Ready":"False" status (will retry)
	W1123 09:01:19.160082 1253581 node_ready.go:57] node "auto-082524" has "Ready":"False" status (will retry)
	W1123 09:01:21.659925 1253581 node_ready.go:57] node "auto-082524" has "Ready":"False" status (will retry)
	
	
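	The coredns ConfigMap replace logged at 09:01:05 above injects a "hosts" stanza (plus a "log" directive) into the CoreDNS Corefile so that pods in the auto-082524 cluster can resolve host.minikube.internal. A minimal sketch of the injected fragment, followed by a hypothetical check against the live ConfigMap (assuming kubectl is pointed at the auto-082524 cluster; the grep filter is illustrative only and not part of the test run):
	
	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }
	
	    # hypothetical verification, not executed by the test:
	    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	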
	==> CRI-O <==
	Nov 23 09:01:04 no-preload-591175 crio[658]: time="2025-11-23T09:01:04.280369871Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:01:04 no-preload-591175 crio[658]: time="2025-11-23T09:01:04.287752907Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:01:04 no-preload-591175 crio[658]: time="2025-11-23T09:01:04.288737544Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:01:04 no-preload-591175 crio[658]: time="2025-11-23T09:01:04.308452202Z" level=info msg="Created container 0f7817fdeccec014e739b087663b5baa22386f19c93acc1e9b1b90b8b9eea98b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2/dashboard-metrics-scraper" id=286d76ff-e87c-4588-a857-f52c8b15ca32 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:01:04 no-preload-591175 crio[658]: time="2025-11-23T09:01:04.309504267Z" level=info msg="Starting container: 0f7817fdeccec014e739b087663b5baa22386f19c93acc1e9b1b90b8b9eea98b" id=524462fc-0e9c-4cc2-a155-fefac8fc8c35 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:01:04 no-preload-591175 crio[658]: time="2025-11-23T09:01:04.31184953Z" level=info msg="Started container" PID=1640 containerID=0f7817fdeccec014e739b087663b5baa22386f19c93acc1e9b1b90b8b9eea98b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2/dashboard-metrics-scraper id=524462fc-0e9c-4cc2-a155-fefac8fc8c35 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8859b9c0184a85fa870afe83654c40ea3f6cddae4430369570eeccef4c2fb4a1
	Nov 23 09:01:04 no-preload-591175 conmon[1638]: conmon 0f7817fdeccec014e739 <ninfo>: container 1640 exited with status 1
	Nov 23 09:01:04 no-preload-591175 crio[658]: time="2025-11-23T09:01:04.543858522Z" level=info msg="Removing container: 536172c59fa20a930a878a233c75d78252f4246c0ff999fcfe6c1cab43582430" id=13a80dcd-eeb0-43d4-a488-91a0bb67388c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:01:04 no-preload-591175 crio[658]: time="2025-11-23T09:01:04.553324551Z" level=info msg="Error loading conmon cgroup of container 536172c59fa20a930a878a233c75d78252f4246c0ff999fcfe6c1cab43582430: cgroup deleted" id=13a80dcd-eeb0-43d4-a488-91a0bb67388c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:01:04 no-preload-591175 crio[658]: time="2025-11-23T09:01:04.560664553Z" level=info msg="Removed container 536172c59fa20a930a878a233c75d78252f4246c0ff999fcfe6c1cab43582430: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2/dashboard-metrics-scraper" id=13a80dcd-eeb0-43d4-a488-91a0bb67388c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.524235555Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.531519944Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.531555881Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.531578961Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.534848424Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.534889768Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.534916336Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.538095077Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.538241312Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.538275904Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.541423557Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.541560447Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.541596532Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.545785509Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:01:10 no-preload-591175 crio[658]: time="2025-11-23T09:01:10.545937735Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0f7817fdeccec       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   8859b9c0184a8       dashboard-metrics-scraper-6ffb444bf9-tgnd2   kubernetes-dashboard
	fbdf54514881b       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           24 seconds ago       Running             storage-provisioner         2                   1b729bfa6a0a2       storage-provisioner                          kube-system
	74106f0c2a309       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   4182ebc0e8626       kubernetes-dashboard-855c9754f9-pjsjj        kubernetes-dashboard
	2fcc0e19109bc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   47ebb33775180       coredns-66bc5c9577-zwlsw                     kube-system
	8c822177d7824       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   866be0d2a21b5       busybox                                      default
	95e135f4cdc6c       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           55 seconds ago       Exited              storage-provisioner         1                   1b729bfa6a0a2       storage-provisioner                          kube-system
	e796016163ed3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   5e7665d9ae55b       kube-proxy-rblwh                             kube-system
	98061e6f3b035       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   8cb02715115d4       kindnet-v65j2                                kube-system
	9176ef57780ee       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   48d249b39a2f5       kube-controller-manager-no-preload-591175    kube-system
	157d0e0fd3e72       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   76aeb73f60348       kube-apiserver-no-preload-591175             kube-system
	aebf3ba174ff5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   b637466a49a97       etcd-no-preload-591175                       kube-system
	84f5d17f9123d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   2e6d4f1213e5b       kube-scheduler-no-preload-591175             kube-system
	
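	The container listing above reflects the CRI-O runtime state on the no-preload-591175 node at collection time. A hedged sketch of how a roughly equivalent listing could be reproduced manually, assuming the profile is still running and crictl is available inside the node (as it is in the minikube base image); this command is illustrative and was not run by the test:
	
	    minikube ssh -p no-preload-591175 -- sudo crictl ps -a
	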
	
	==> coredns [2fcc0e19109bc84ca2b9d741452c957bdab6dd11b089a54216971f59ea750720] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36412 - 20243 "HINFO IN 7023655903771878389.6344257054590995337. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012916968s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-591175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-591175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=no-preload-591175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_59_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:59:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-591175
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:01:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:00:59 +0000   Sun, 23 Nov 2025 08:59:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:00:59 +0000   Sun, 23 Nov 2025 08:59:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:00:59 +0000   Sun, 23 Nov 2025 08:59:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:00:59 +0000   Sun, 23 Nov 2025 08:59:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-591175
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                f436885f-b4ec-44fe-a494-6bb1784496fe
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 coredns-66bc5c9577-zwlsw                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     119s
	  kube-system                 etcd-no-preload-591175                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m4s
	  kube-system                 kindnet-v65j2                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      119s
	  kube-system                 kube-apiserver-no-preload-591175              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-no-preload-591175     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-rblwh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-scheduler-no-preload-591175              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-tgnd2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-pjsjj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 117s                   kube-proxy       
	  Normal   Starting                 54s                    kube-proxy       
	  Normal   Starting                 2m13s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m13s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m13s (x8 over 2m13s)  kubelet          Node no-preload-591175 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m13s (x8 over 2m13s)  kubelet          Node no-preload-591175 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m13s (x6 over 2m13s)  kubelet          Node no-preload-591175 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m4s                   kubelet          Node no-preload-591175 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m4s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m4s                   kubelet          Node no-preload-591175 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m4s                   kubelet          Node no-preload-591175 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m4s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m                     node-controller  Node no-preload-591175 event: Registered Node no-preload-591175 in Controller
	  Normal   NodeReady                104s                   kubelet          Node no-preload-591175 status is now: NodeReady
	  Normal   Starting                 66s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 66s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  66s (x8 over 66s)      kubelet          Node no-preload-591175 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s (x8 over 66s)      kubelet          Node no-preload-591175 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s (x8 over 66s)      kubelet          Node no-preload-591175 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                    node-controller  Node no-preload-591175 event: Registered Node no-preload-591175 in Controller
	
	
	==> dmesg <==
	[Nov23 08:39] overlayfs: idmapped layers are currently not supported
	[ +25.090966] overlayfs: idmapped layers are currently not supported
	[Nov23 08:40] overlayfs: idmapped layers are currently not supported
	[ +26.896711] overlayfs: idmapped layers are currently not supported
	[Nov23 08:41] overlayfs: idmapped layers are currently not supported
	[Nov23 08:43] overlayfs: idmapped layers are currently not supported
	[Nov23 08:45] overlayfs: idmapped layers are currently not supported
	[Nov23 08:46] overlayfs: idmapped layers are currently not supported
	[Nov23 08:47] overlayfs: idmapped layers are currently not supported
	[Nov23 08:49] overlayfs: idmapped layers are currently not supported
	[Nov23 08:51] overlayfs: idmapped layers are currently not supported
	[ +55.116920] overlayfs: idmapped layers are currently not supported
	[Nov23 08:52] overlayfs: idmapped layers are currently not supported
	[  +5.731396] overlayfs: idmapped layers are currently not supported
	[Nov23 08:53] overlayfs: idmapped layers are currently not supported
	[Nov23 08:54] overlayfs: idmapped layers are currently not supported
	[Nov23 08:55] overlayfs: idmapped layers are currently not supported
	[Nov23 08:56] overlayfs: idmapped layers are currently not supported
	[Nov23 08:57] overlayfs: idmapped layers are currently not supported
	[Nov23 08:58] overlayfs: idmapped layers are currently not supported
	[ +37.440319] overlayfs: idmapped layers are currently not supported
	[Nov23 08:59] overlayfs: idmapped layers are currently not supported
	[Nov23 09:00] overlayfs: idmapped layers are currently not supported
	[ +12.221002] overlayfs: idmapped layers are currently not supported
	[ +31.219239] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [aebf3ba174ff52b2d9016df7e7c2a73bddd769ac238a51aeefd85b75d890f557] <==
	{"level":"warn","ts":"2025-11-23T09:00:26.241831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.287177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.355775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.393165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.435411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.505037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.555241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.603984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.661732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.709923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.727630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.809834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.839405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.885751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.886571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.906129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.936046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.973472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:26.994300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:27.021095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:27.049098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:27.077859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:27.115474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:27.131084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:27.232627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37346","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:01:26 up  9:43,  0 user,  load average: 3.08, 3.37, 2.87
	Linux no-preload-591175 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [98061e6f3b0355afb9092940374e4137f051f66db6053d855478a46ce03c472c] <==
	I1123 09:00:30.261033       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:00:30.264273       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 09:00:30.264429       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:00:30.264443       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:00:30.264455       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:00:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:00:30.538588       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:00:30.538614       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:00:30.538622       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:00:30.538737       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 09:01:00.539043       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 09:01:00.539100       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 09:01:00.539229       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 09:01:00.539308       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1123 09:01:02.039889       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:01:02.039997       1 metrics.go:72] Registering metrics
	I1123 09:01:02.040103       1 controller.go:711] "Syncing nftables rules"
	I1123 09:01:10.523864       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:01:10.523963       1 main.go:301] handling current node
	I1123 09:01:20.527327       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:01:20.527358       1 main.go:301] handling current node
	
	
	==> kube-apiserver [157d0e0fd3e72e28588020ec573e4dedd42cd637d9021c7aaf88f84bb1ff9ca6] <==
	I1123 09:00:28.630525       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 09:00:28.640155       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 09:00:28.640797       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 09:00:28.650749       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 09:00:28.650825       1 policy_source.go:240] refreshing policies
	I1123 09:00:28.651027       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 09:00:28.665999       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:00:28.704431       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 09:00:28.709720       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 09:00:28.738080       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 09:00:28.741668       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:00:28.749617       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 09:00:28.809793       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1123 09:00:28.889681       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 09:00:29.029215       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:00:29.222194       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:00:30.870910       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:00:30.988326       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:00:31.099339       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:00:31.135836       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:00:31.381632       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.30.26"}
	I1123 09:00:31.451952       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.89.206"}
	I1123 09:00:31.929481       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:00:32.146802       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:00:32.344962       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [9176ef57780eecfb0be6625d611ecda108774756a7a7ba2e04cae7ba6631a68b] <==
	I1123 09:00:31.904855       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-591175"
	I1123 09:00:31.904936       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 09:00:31.907286       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:00:31.907512       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:00:31.909127       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:00:31.909268       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 09:00:31.909307       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 09:00:31.915272       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 09:00:31.915495       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:00:31.931309       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:00:31.931296       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 09:00:31.931331       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:00:31.931353       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:00:31.934344       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:00:31.939310       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:00:31.939334       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:00:31.939342       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:00:31.943864       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 09:00:31.945065       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:00:31.951733       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:00:31.959338       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 09:00:31.971510       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 09:00:31.974696       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 09:00:31.976264       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:00:31.979358       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	
	
	==> kube-proxy [e796016163ed388aa9e995b7fbf568cb73db2071a880fa33c913e398ff464229] <==
	I1123 09:00:30.921395       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:00:31.327134       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:00:31.436412       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:00:31.436445       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 09:00:31.436511       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:00:31.483636       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:00:31.483690       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:00:31.516055       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:00:31.523714       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:00:31.531375       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:00:31.572347       1 config.go:200] "Starting service config controller"
	I1123 09:00:31.572377       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:00:31.572403       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:00:31.572407       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:00:31.572422       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:00:31.572426       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:00:31.583788       1 config.go:309] "Starting node config controller"
	I1123 09:00:31.583852       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:00:31.583882       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:00:31.672669       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:00:31.672702       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:00:31.672750       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [84f5d17f9123dfb226e15a389bc9a5e5b2de8b259f1186f86f2f3673b2895055] <==
	I1123 09:00:23.086868       1 serving.go:386] Generated self-signed cert in-memory
	W1123 09:00:28.307973       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 09:00:28.308015       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 09:00:28.308025       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 09:00:28.308033       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 09:00:28.505066       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:00:28.505097       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:00:28.608773       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 09:00:28.608929       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:00:28.608949       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:00:28.608965       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:00:28.716236       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:00:32 no-preload-591175 kubelet[780]: I1123 09:00:32.766479     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-284vw\" (UniqueName: \"kubernetes.io/projected/362aa06f-c276-4d53-b60f-02c2feed6668-kube-api-access-284vw\") pod \"kubernetes-dashboard-855c9754f9-pjsjj\" (UID: \"362aa06f-c276-4d53-b60f-02c2feed6668\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pjsjj"
	Nov 23 09:00:32 no-preload-591175 kubelet[780]: I1123 09:00:32.766788     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/362aa06f-c276-4d53-b60f-02c2feed6668-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-pjsjj\" (UID: \"362aa06f-c276-4d53-b60f-02c2feed6668\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pjsjj"
	Nov 23 09:00:32 no-preload-591175 kubelet[780]: I1123 09:00:32.766852     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kr7g\" (UniqueName: \"kubernetes.io/projected/8e169862-196e-4afb-ad57-199e564f44e3-kube-api-access-8kr7g\") pod \"dashboard-metrics-scraper-6ffb444bf9-tgnd2\" (UID: \"8e169862-196e-4afb-ad57-199e564f44e3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2"
	Nov 23 09:00:32 no-preload-591175 kubelet[780]: I1123 09:00:32.766884     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8e169862-196e-4afb-ad57-199e564f44e3-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-tgnd2\" (UID: \"8e169862-196e-4afb-ad57-199e564f44e3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2"
	Nov 23 09:00:33 no-preload-591175 kubelet[780]: W1123 09:00:33.068815     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/14f3744363b876e7e01d62b25abaaf582fe456d1f9eb4abc90ea5abb2108d369/crio-8859b9c0184a85fa870afe83654c40ea3f6cddae4430369570eeccef4c2fb4a1 WatchSource:0}: Error finding container 8859b9c0184a85fa870afe83654c40ea3f6cddae4430369570eeccef4c2fb4a1: Status 404 returned error can't find the container with id 8859b9c0184a85fa870afe83654c40ea3f6cddae4430369570eeccef4c2fb4a1
	Nov 23 09:00:37 no-preload-591175 kubelet[780]: I1123 09:00:37.177297     780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 09:00:45 no-preload-591175 kubelet[780]: I1123 09:00:45.471892     780 scope.go:117] "RemoveContainer" containerID="c662e068a81fc6266452673e647d3a289feb6f32478889e431767e0380d2b133"
	Nov 23 09:00:45 no-preload-591175 kubelet[780]: I1123 09:00:45.508055     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pjsjj" podStartSLOduration=7.808804731 podStartE2EDuration="13.508038948s" podCreationTimestamp="2025-11-23 09:00:32 +0000 UTC" firstStartedPulling="2025-11-23 09:00:32.988145058 +0000 UTC m=+13.015821933" lastFinishedPulling="2025-11-23 09:00:38.68737925 +0000 UTC m=+18.715056150" observedRunningTime="2025-11-23 09:00:39.472603118 +0000 UTC m=+19.500280009" watchObservedRunningTime="2025-11-23 09:00:45.508038948 +0000 UTC m=+25.535715823"
	Nov 23 09:00:46 no-preload-591175 kubelet[780]: I1123 09:00:46.476280     780 scope.go:117] "RemoveContainer" containerID="c662e068a81fc6266452673e647d3a289feb6f32478889e431767e0380d2b133"
	Nov 23 09:00:46 no-preload-591175 kubelet[780]: I1123 09:00:46.477146     780 scope.go:117] "RemoveContainer" containerID="536172c59fa20a930a878a233c75d78252f4246c0ff999fcfe6c1cab43582430"
	Nov 23 09:00:46 no-preload-591175 kubelet[780]: E1123 09:00:46.478329     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tgnd2_kubernetes-dashboard(8e169862-196e-4afb-ad57-199e564f44e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2" podUID="8e169862-196e-4afb-ad57-199e564f44e3"
	Nov 23 09:00:47 no-preload-591175 kubelet[780]: I1123 09:00:47.481268     780 scope.go:117] "RemoveContainer" containerID="536172c59fa20a930a878a233c75d78252f4246c0ff999fcfe6c1cab43582430"
	Nov 23 09:00:47 no-preload-591175 kubelet[780]: E1123 09:00:47.482324     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tgnd2_kubernetes-dashboard(8e169862-196e-4afb-ad57-199e564f44e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2" podUID="8e169862-196e-4afb-ad57-199e564f44e3"
	Nov 23 09:00:53 no-preload-591175 kubelet[780]: I1123 09:00:53.002389     780 scope.go:117] "RemoveContainer" containerID="536172c59fa20a930a878a233c75d78252f4246c0ff999fcfe6c1cab43582430"
	Nov 23 09:00:53 no-preload-591175 kubelet[780]: E1123 09:00:53.003277     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tgnd2_kubernetes-dashboard(8e169862-196e-4afb-ad57-199e564f44e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2" podUID="8e169862-196e-4afb-ad57-199e564f44e3"
	Nov 23 09:01:01 no-preload-591175 kubelet[780]: I1123 09:01:01.518896     780 scope.go:117] "RemoveContainer" containerID="95e135f4cdc6c76c534ff22368e28377e25b471ed736c58f590eae564658328b"
	Nov 23 09:01:04 no-preload-591175 kubelet[780]: I1123 09:01:04.276653     780 scope.go:117] "RemoveContainer" containerID="536172c59fa20a930a878a233c75d78252f4246c0ff999fcfe6c1cab43582430"
	Nov 23 09:01:04 no-preload-591175 kubelet[780]: I1123 09:01:04.530228     780 scope.go:117] "RemoveContainer" containerID="536172c59fa20a930a878a233c75d78252f4246c0ff999fcfe6c1cab43582430"
	Nov 23 09:01:04 no-preload-591175 kubelet[780]: I1123 09:01:04.530510     780 scope.go:117] "RemoveContainer" containerID="0f7817fdeccec014e739b087663b5baa22386f19c93acc1e9b1b90b8b9eea98b"
	Nov 23 09:01:04 no-preload-591175 kubelet[780]: E1123 09:01:04.530644     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tgnd2_kubernetes-dashboard(8e169862-196e-4afb-ad57-199e564f44e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2" podUID="8e169862-196e-4afb-ad57-199e564f44e3"
	Nov 23 09:01:13 no-preload-591175 kubelet[780]: I1123 09:01:13.001592     780 scope.go:117] "RemoveContainer" containerID="0f7817fdeccec014e739b087663b5baa22386f19c93acc1e9b1b90b8b9eea98b"
	Nov 23 09:01:13 no-preload-591175 kubelet[780]: E1123 09:01:13.001810     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tgnd2_kubernetes-dashboard(8e169862-196e-4afb-ad57-199e564f44e3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tgnd2" podUID="8e169862-196e-4afb-ad57-199e564f44e3"
	Nov 23 09:01:21 no-preload-591175 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 09:01:21 no-preload-591175 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 09:01:21 no-preload-591175 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [74106f0c2a309342ef590081bd9557bf94fa83268eb9ee5ec4d761dd9cb1c240] <==
	2025/11/23 09:00:38 Using namespace: kubernetes-dashboard
	2025/11/23 09:00:38 Using in-cluster config to connect to apiserver
	2025/11/23 09:00:38 Using secret token for csrf signing
	2025/11/23 09:00:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 09:00:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 09:00:38 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 09:00:38 Generating JWE encryption key
	2025/11/23 09:00:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 09:00:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 09:00:40 Initializing JWE encryption key from synchronized object
	2025/11/23 09:00:40 Creating in-cluster Sidecar client
	2025/11/23 09:00:40 Serving insecurely on HTTP port: 9090
	2025/11/23 09:00:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:01:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:00:38 Starting overwatch
	
	
	==> storage-provisioner [95e135f4cdc6c76c534ff22368e28377e25b471ed736c58f590eae564658328b] <==
	I1123 09:00:30.858439       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 09:01:00.864725       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fbdf54514881b66bf257ad9c157d36d9d46f3a29c186b01a6f64ee63c4de43fb] <==
	I1123 09:01:01.635271       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 09:01:01.709398       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 09:01:01.710379       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 09:01:01.717447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:05.182719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:09.455466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:13.055388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:16.109273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:19.131206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:19.140021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:01:19.140219       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:01:19.141103       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-591175_3506b145-c0e5-4893-b3bf-6f8d18292f33!
	I1123 09:01:19.145982       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"600e7609-78b8-477b-9429-5d86b624370f", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-591175_3506b145-c0e5-4893-b3bf-6f8d18292f33 became leader
	W1123 09:01:19.148174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:19.151855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:01:19.242308       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-591175_3506b145-c0e5-4893-b3bf-6f8d18292f33!
	W1123 09:01:21.154532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:21.164555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:23.167573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:23.172660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:25.178421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:25.187299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-591175 -n no-preload-591175
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-591175 -n no-preload-591175: exit status 2 (372.020386ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-591175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.35s)
E1123 09:06:54.914712 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:06:56.995333 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:07:00.036496 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:07:10.280758 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:07:24.702571 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (261/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.88
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 7.5
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.1
18 TestDownloadOnly/v1.34.1/DeleteAll 0.24
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 166.83
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 9.96
48 TestAddons/StoppedEnableDisable 12.45
49 TestCertOptions 42.36
50 TestCertExpiration 246.34
52 TestForceSystemdFlag 39.07
53 TestForceSystemdEnv 39.57
58 TestErrorSpam/setup 31.52
59 TestErrorSpam/start 0.76
60 TestErrorSpam/status 1.13
61 TestErrorSpam/pause 6.74
62 TestErrorSpam/unpause 6.21
63 TestErrorSpam/stop 1.5
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 79.12
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.61
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.55
75 TestFunctional/serial/CacheCmd/cache/add_local 1.1
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.85
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 39.23
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.42
86 TestFunctional/serial/LogsFileCmd 1.46
87 TestFunctional/serial/InvalidService 4.08
89 TestFunctional/parallel/ConfigCmd 0.5
90 TestFunctional/parallel/DashboardCmd 7.54
91 TestFunctional/parallel/DryRun 0.45
92 TestFunctional/parallel/InternationalLanguage 0.19
93 TestFunctional/parallel/StatusCmd 1.04
98 TestFunctional/parallel/AddonsCmd 0.18
99 TestFunctional/parallel/PersistentVolumeClaim 28.6
101 TestFunctional/parallel/SSHCmd 0.8
102 TestFunctional/parallel/CpCmd 1.66
104 TestFunctional/parallel/FileSync 0.28
105 TestFunctional/parallel/CertSync 1.71
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.79
113 TestFunctional/parallel/License 0.29
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
116 TestFunctional/parallel/Version/short 0.06
117 TestFunctional/parallel/Version/components 0.98
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.48
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
125 TestFunctional/parallel/ImageCommands/ImageBuild 3.94
126 TestFunctional/parallel/ImageCommands/Setup 0.65
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.25
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/MountCmd/any-port 8.48
144 TestFunctional/parallel/MountCmd/specific-port 2.45
145 TestFunctional/parallel/MountCmd/VerifyCleanup 1.79
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
148 TestFunctional/parallel/ProfileCmd/profile_list 0.43
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
150 TestFunctional/parallel/ServiceCmd/List 1.31
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.4
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 199.11
163 TestMultiControlPlane/serial/DeployApp 6.58
164 TestMultiControlPlane/serial/PingHostFromPods 1.47
165 TestMultiControlPlane/serial/AddWorkerNode 30.68
166 TestMultiControlPlane/serial/NodeLabels 0.1
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.18
168 TestMultiControlPlane/serial/CopyFile 19.88
169 TestMultiControlPlane/serial/StopSecondaryNode 12.84
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.83
171 TestMultiControlPlane/serial/RestartSecondaryNode 29.8
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.42
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 144.27
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.67
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.77
176 TestMultiControlPlane/serial/StopCluster 36.08
177 TestMultiControlPlane/serial/RestartCluster 93.89
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.77
179 TestMultiControlPlane/serial/AddSecondaryNode 55.67
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.11
185 TestJSONOutput/start/Command 79.17
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.87
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 44.81
211 TestKicCustomNetwork/use_default_bridge_network 38.81
212 TestKicExistingNetwork 36.33
213 TestKicCustomSubnet 33.74
214 TestKicStaticIP 37.01
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 69.89
219 TestMountStart/serial/StartWithMountFirst 8.97
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 8.62
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.7
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 8.13
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 102.53
231 TestMultiNode/serial/DeployApp2Nodes 4.71
232 TestMultiNode/serial/PingHostFrom2Pods 0.91
233 TestMultiNode/serial/AddNode 58.5
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.73
236 TestMultiNode/serial/CopyFile 10.42
237 TestMultiNode/serial/StopNode 2.36
238 TestMultiNode/serial/StartAfterStop 7.89
239 TestMultiNode/serial/RestartKeepsNodes 80.39
240 TestMultiNode/serial/DeleteNode 5.72
241 TestMultiNode/serial/StopMultiNode 23.98
242 TestMultiNode/serial/RestartMultiNode 49.44
243 TestMultiNode/serial/ValidateNameConflict 36.13
248 TestPreload 126.86
250 TestScheduledStopUnix 110.69
253 TestInsufficientStorage 13.35
254 TestRunningBinaryUpgrade 53.41
256 TestKubernetesUpgrade 364.2
257 TestMissingContainerUpgrade 119.57
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 45.83
261 TestNoKubernetes/serial/StartWithStopK8s 7.62
262 TestNoKubernetes/serial/Start 9.25
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
265 TestNoKubernetes/serial/ProfileList 1.64
266 TestNoKubernetes/serial/Stop 1.39
267 TestNoKubernetes/serial/StartNoArgs 7.16
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
269 TestStoppedBinaryUpgrade/Setup 1.19
270 TestStoppedBinaryUpgrade/Upgrade 57.79
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.14
280 TestPause/serial/Start 80.36
281 TestPause/serial/SecondStartNoReconfiguration 26.83
290 TestNetworkPlugins/group/false 5
295 TestStartStop/group/old-k8s-version/serial/FirstStart 63.92
296 TestStartStop/group/old-k8s-version/serial/DeployApp 8.46
298 TestStartStop/group/old-k8s-version/serial/Stop 12
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
300 TestStartStop/group/old-k8s-version/serial/SecondStart 46.92
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.1
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
306 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.79
308 TestStartStop/group/embed-certs/serial/FirstStart 83.28
309 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.35
311 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.01
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
313 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.22
314 TestStartStop/group/embed-certs/serial/DeployApp 9.36
316 TestStartStop/group/embed-certs/serial/Stop 12.07
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
318 TestStartStop/group/embed-certs/serial/SecondStart 53.71
319 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
320 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.11
321 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
324 TestStartStop/group/no-preload/serial/FirstStart 68.68
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.02
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
327 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
330 TestStartStop/group/newest-cni/serial/FirstStart 37.89
331 TestStartStop/group/no-preload/serial/DeployApp 10.45
332 TestStartStop/group/newest-cni/serial/DeployApp 0
335 TestStartStop/group/newest-cni/serial/Stop 1.49
336 TestStartStop/group/no-preload/serial/Stop 12.66
337 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
338 TestStartStop/group/newest-cni/serial/SecondStart 15.81
339 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.29
340 TestStartStop/group/no-preload/serial/SecondStart 57.21
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
345 TestNetworkPlugins/group/auto/Start 81.97
346 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
347 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
348 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
350 TestNetworkPlugins/group/kindnet/Start 80.89
351 TestNetworkPlugins/group/auto/KubeletFlags 0.39
352 TestNetworkPlugins/group/auto/NetCatPod 11.36
353 TestNetworkPlugins/group/auto/DNS 0.2
354 TestNetworkPlugins/group/auto/Localhost 0.16
355 TestNetworkPlugins/group/auto/HairPin 0.17
356 TestNetworkPlugins/group/calico/Start 59.26
357 TestNetworkPlugins/group/kindnet/ControllerPod 6
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
359 TestNetworkPlugins/group/kindnet/NetCatPod 12.47
360 TestNetworkPlugins/group/kindnet/DNS 0.25
361 TestNetworkPlugins/group/kindnet/Localhost 0.24
362 TestNetworkPlugins/group/kindnet/HairPin 0.17
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.42
365 TestNetworkPlugins/group/calico/NetCatPod 11.39
366 TestNetworkPlugins/group/custom-flannel/Start 63.6
367 TestNetworkPlugins/group/calico/DNS 0.2
368 TestNetworkPlugins/group/calico/Localhost 0.18
369 TestNetworkPlugins/group/calico/HairPin 0.16
370 TestNetworkPlugins/group/enable-default-cni/Start 83.83
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.42
373 TestNetworkPlugins/group/custom-flannel/DNS 0.18
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
376 TestNetworkPlugins/group/flannel/Start 56
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.33
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
382 TestNetworkPlugins/group/bridge/Start 81.41
383 TestNetworkPlugins/group/flannel/ControllerPod 6
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
385 TestNetworkPlugins/group/flannel/NetCatPod 12.36
386 TestNetworkPlugins/group/flannel/DNS 0.2
387 TestNetworkPlugins/group/flannel/Localhost 0.14
388 TestNetworkPlugins/group/flannel/HairPin 0.15
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
390 TestNetworkPlugins/group/bridge/NetCatPod 10.26
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.13
393 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.28.0/json-events (6.88s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-833751 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-833751 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.882186951s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.88s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1123 07:55:53.221516 1043159 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1123 07:55:53.221596 1043159 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-833751
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-833751: exit status 85 (93.178999ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-833751 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-833751 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 07:55:46
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 07:55:46.384001 1043164 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:55:46.384134 1043164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:46.384145 1043164 out.go:374] Setting ErrFile to fd 2...
	I1123 07:55:46.384151 1043164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:46.384482 1043164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	W1123 07:55:46.385143 1043164 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21966-1041293/.minikube/config/config.json: open /home/jenkins/minikube-integration/21966-1041293/.minikube/config/config.json: no such file or directory
	I1123 07:55:46.385623 1043164 out.go:368] Setting JSON to true
	I1123 07:55:46.386492 1043164 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":31092,"bootTime":1763853455,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 07:55:46.386590 1043164 start.go:143] virtualization:  
	I1123 07:55:46.391839 1043164 out.go:99] [download-only-833751] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1123 07:55:46.392002 1043164 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball: no such file or directory
	I1123 07:55:46.392078 1043164 notify.go:221] Checking for updates...
	I1123 07:55:46.395650 1043164 out.go:171] MINIKUBE_LOCATION=21966
	I1123 07:55:46.399159 1043164 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 07:55:46.402521 1043164 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 07:55:46.405727 1043164 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 07:55:46.408847 1043164 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1123 07:55:46.414781 1043164 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 07:55:46.415064 1043164 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 07:55:46.442297 1043164 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 07:55:46.442397 1043164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:46.497353 1043164 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-23 07:55:46.488416824 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 07:55:46.497459 1043164 docker.go:319] overlay module found
	I1123 07:55:46.500615 1043164 out.go:99] Using the docker driver based on user configuration
	I1123 07:55:46.500654 1043164 start.go:309] selected driver: docker
	I1123 07:55:46.500673 1043164 start.go:927] validating driver "docker" against <nil>
	I1123 07:55:46.500780 1043164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:46.554729 1043164 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-23 07:55:46.54607262 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 07:55:46.554887 1043164 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 07:55:46.555147 1043164 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1123 07:55:46.555350 1043164 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 07:55:46.558615 1043164 out.go:171] Using Docker driver with root privileges
	I1123 07:55:46.561561 1043164 cni.go:84] Creating CNI manager for ""
	I1123 07:55:46.561637 1043164 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 07:55:46.561650 1043164 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 07:55:46.561728 1043164 start.go:353] cluster config:
	{Name:download-only-833751 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-833751 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 07:55:46.564698 1043164 out.go:99] Starting "download-only-833751" primary control-plane node in "download-only-833751" cluster
	I1123 07:55:46.564718 1043164 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 07:55:46.567607 1043164 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 07:55:46.567643 1043164 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 07:55:46.567798 1043164 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 07:55:46.582648 1043164 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 07:55:46.582849 1043164 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 07:55:46.582946 1043164 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 07:55:46.628993 1043164 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1123 07:55:46.629019 1043164 cache.go:65] Caching tarball of preloaded images
	I1123 07:55:46.629184 1043164 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 07:55:46.632590 1043164 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1123 07:55:46.632618 1043164 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1123 07:55:46.726176 1043164 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1123 07:55:46.726329 1043164 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1123 07:55:51.287677 1043164 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	
	
	* The control-plane node download-only-833751 host does not exist
	  To start a cluster, run: "minikube start -p download-only-833751"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-833751
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (7.5s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-540328 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-540328 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.503185339s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (7.50s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1123 07:56:01.157096 1043159 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1123 07:56:01.157131 1043159 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-540328
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-540328: exit status 85 (95.846999ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-833751 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-833751 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:55 UTC │
	│ delete  │ -p download-only-833751                                                                                                                                                   │ download-only-833751 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:55 UTC │
	│ start   │ -o=json --download-only -p download-only-540328 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-540328 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 07:55:53
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 07:55:53.693861 1043366 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:55:53.694007 1043366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:53.694033 1043366 out.go:374] Setting ErrFile to fd 2...
	I1123 07:55:53.694056 1043366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:53.694314 1043366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 07:55:53.694758 1043366 out.go:368] Setting JSON to true
	I1123 07:55:53.695598 1043366 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":31099,"bootTime":1763853455,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 07:55:53.695662 1043366 start.go:143] virtualization:  
	I1123 07:55:53.698969 1043366 out.go:99] [download-only-540328] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 07:55:53.699230 1043366 notify.go:221] Checking for updates...
	I1123 07:55:53.703096 1043366 out.go:171] MINIKUBE_LOCATION=21966
	I1123 07:55:53.706006 1043366 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 07:55:53.708913 1043366 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 07:55:53.711808 1043366 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 07:55:53.714754 1043366 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1123 07:55:53.720360 1043366 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 07:55:53.720640 1043366 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 07:55:53.748750 1043366 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 07:55:53.748864 1043366 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:53.804051 1043366 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-11-23 07:55:53.795480755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 07:55:53.804157 1043366 docker.go:319] overlay module found
	I1123 07:55:53.807244 1043366 out.go:99] Using the docker driver based on user configuration
	I1123 07:55:53.807287 1043366 start.go:309] selected driver: docker
	I1123 07:55:53.807294 1043366 start.go:927] validating driver "docker" against <nil>
	I1123 07:55:53.807392 1043366 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:53.859919 1043366 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-11-23 07:55:53.851475881 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 07:55:53.860071 1043366 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 07:55:53.860335 1043366 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1123 07:55:53.860496 1043366 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 07:55:53.863580 1043366 out.go:171] Using Docker driver with root privileges
	I1123 07:55:53.866389 1043366 cni.go:84] Creating CNI manager for ""
	I1123 07:55:53.866457 1043366 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 07:55:53.866470 1043366 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 07:55:53.866540 1043366 start.go:353] cluster config:
	{Name:download-only-540328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-540328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 07:55:53.869549 1043366 out.go:99] Starting "download-only-540328" primary control-plane node in "download-only-540328" cluster
	I1123 07:55:53.869567 1043366 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 07:55:53.872331 1043366 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 07:55:53.872381 1043366 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 07:55:53.872463 1043366 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 07:55:53.890013 1043366 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 07:55:53.890140 1043366 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 07:55:53.890159 1043366 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 07:55:53.890163 1043366 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 07:55:53.890170 1043366 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 07:55:53.927589 1043366 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 07:55:53.927613 1043366 cache.go:65] Caching tarball of preloaded images
	I1123 07:55:53.927791 1043366 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 07:55:53.930899 1043366 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1123 07:55:53.930924 1043366 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1123 07:55:54.012411 1043366 preload.go:295] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1123 07:55:54.012469 1043366 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-540328 host does not exist
	  To start a cluster, run: "minikube start -p download-only-540328"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.10s)
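Note: the "Last Start" log above records minikube's preload flow: check the local cache, fetch an md5 checksum from the GCS API, then download the tarball via a ?checksum=md5:... URL. The following Go sketch is not minikube's download code; it only illustrates the same checksum-verified download pattern, with the URL and checksum copied from the log purely as example values.

// checksum_download.go — minimal sketch of a checksum-verified download
// (illustrative only; not minikube's download.go).
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadWithMD5(url, wantMD5, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the bytes as they are streamed to disk.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}

	got := hex.EncodeToString(h.Sum(nil))
	if got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and checksum taken from the log above, used here only as examples.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
	if err := downloadWithMD5(url, "bc3e4aa50814345ef9ba3452bb5efb9f", "/tmp/preload.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}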

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-540328
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I1123 07:56:02.368237 1043159 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-804601 --alsologtostderr --binary-mirror http://127.0.0.1:32857 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-804601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-804601
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-782760
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-782760: exit status 85 (79.721233ms)

                                                
                                                
-- stdout --
	* Profile "addons-782760" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-782760"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-782760
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-782760: exit status 85 (93.914288ms)

                                                
                                                
-- stdout --
	* Profile "addons-782760" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-782760"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/Setup (166.83s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-782760 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-782760 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m46.82618945s)
--- PASS: TestAddons/Setup (166.83s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-782760 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-782760 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.96s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-782760 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-782760 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b5630b0a-672e-43dd-a075-33e50a0753f8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b5630b0a-672e-43dd-a075-33e50a0753f8] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003494071s
addons_test.go:694: (dbg) Run:  kubectl --context addons-782760 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-782760 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-782760 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-782760 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.96s)
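Note: this test asserts that the gcp-auth webhook injected GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT into the busybox pod and mounted the fake credentials file (the printenv/cat calls above). A minimal in-pod Go sketch of the same check, illustrative only and not the addon's implementation:

// gcpauth_check.go — sketch of the in-pod assertions made above with printenv/cat.
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	path := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS")
	if path == "" {
		log.Fatal("GOOGLE_APPLICATION_CREDENTIALS is not set")
	}
	creds, err := os.ReadFile(path)
	if err != nil {
		log.Fatalf("credentials file %s not readable: %v", path, err)
	}
	fmt.Printf("project: %s\n", os.Getenv("GOOGLE_CLOUD_PROJECT"))
	fmt.Printf("credentials (%d bytes) found at %s\n", len(creds), path)
}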

                                                
                                    
TestAddons/StoppedEnableDisable (12.45s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-782760
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-782760: (12.166532066s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-782760
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-782760
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-782760
--- PASS: TestAddons/StoppedEnableDisable (12.45s)

                                                
                                    
TestCertOptions (42.36s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-194318 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-194318 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (39.485131475s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-194318 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-194318 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-194318 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-194318" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-194318
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-194318: (2.119789105s)
--- PASS: TestCertOptions (42.36s)

                                                
                                    
TestCertExpiration (246.34s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-322507 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-322507 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.887806216s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-322507 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1123 08:55:56.067329 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-322507 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (21.639543268s)
helpers_test.go:175: Cleaning up "cert-expiration-322507" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-322507
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-322507: (2.809936812s)
--- PASS: TestCertExpiration (246.34s)
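Note: TestCertOptions reads /var/lib/minikube/certs/apiserver.crt with openssl, and TestCertExpiration re-runs start with --cert-expiration=3m and then 8760h. A minimal Go sketch of the same expiry check, assuming the cert path from the openssl invocation above (illustrative, not the tests' code):

// certcheck.go — print the NotAfter of the apiserver certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("apiserver cert expires: %s\n", cert.NotAfter)
}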

                                                
                                    
TestForceSystemdFlag (39.07s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-721521 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1123 08:50:56.067908 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-721521 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.714061886s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-721521 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-721521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-721521
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-721521: (2.959708029s)
--- PASS: TestForceSystemdFlag (39.07s)

                                                
                                    
TestForceSystemdEnv (39.57s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-498438 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-498438 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.264266706s)
helpers_test.go:175: Cleaning up "force-systemd-env-498438" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-498438
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-498438: (3.309149575s)
--- PASS: TestForceSystemdEnv (39.57s)

                                                
                                    
TestErrorSpam/setup (31.52s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-811206 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-811206 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-811206 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-811206 --driver=docker  --container-runtime=crio: (31.516300038s)
--- PASS: TestErrorSpam/setup (31.52s)

                                                
                                    
TestErrorSpam/start (0.76s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

                                                
                                    
TestErrorSpam/status (1.13s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 status
--- PASS: TestErrorSpam/status (1.13s)

                                                
                                    
TestErrorSpam/pause (6.74s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 pause: exit status 80 (2.315408652s)

                                                
                                                
-- stdout --
	* Pausing node nospam-811206 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:02:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 pause: exit status 80 (1.665031837s)

                                                
                                                
-- stdout --
	* Pausing node nospam-811206 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:02:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 pause: exit status 80 (2.754747719s)

                                                
                                                
-- stdout --
	* Pausing node nospam-811206 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:03:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.74s)
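Note: every pause attempt above fails the same way: minikube first lists containers on the node with "sudo runc list -f json", and that command exits 1 with "open /run/runc: no such file or directory", which suggests runc's state directory was absent at that moment. A small Go sketch of the same probe, run over "minikube ssh" the way the test binary invokes it (profile name taken from this test; purely illustrative):

// runc_probe.go — reproduce the list-running probe behind the GUEST_PAUSE error.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "ssh", "-p", "nospam-811206", "--",
		"sudo runc list -f json")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// An "open /run/runc: no such file or directory" message here matches
		// the failures recorded in TestErrorSpam/pause and /unpause.
		fmt.Fprintln(os.Stderr, "runc list failed:", err)
	}
}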

                                                
                                    
TestErrorSpam/unpause (6.21s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 unpause: exit status 80 (1.957150762s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-811206 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:03:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 unpause: exit status 80 (2.176002464s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-811206 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:03:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 unpause: exit status 80 (2.077017446s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-811206 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:03:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.21s)

                                                
                                    
TestErrorSpam/stop (1.5s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 stop: (1.30689464s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-811206 --log_dir /tmp/nospam-811206 stop
--- PASS: TestErrorSpam/stop (1.50s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21966-1041293/.minikube/files/etc/test/nested/copy/1043159/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (79.12s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-333688 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1123 08:03:50.643130 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:50.649540 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:50.660985 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:50.682422 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:50.723927 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:50.805327 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:50.966751 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:51.288403 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:51.930402 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:53.212056 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:55.774008 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:04:00.896037 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:04:11.139094 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:04:31.620640 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-333688 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m19.117988349s)
--- PASS: TestFunctional/serial/StartWithProxy (79.12s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (29.61s)

=== RUN   TestFunctional/serial/SoftStart
I1123 08:04:31.759654 1043159 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-333688 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-333688 --alsologtostderr -v=8: (29.603244594s)
functional_test.go:678: soft start took 29.604975358s for "functional-333688" cluster.
I1123 08:05:01.363237 1043159 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (29.61s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-333688 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-333688 cache add registry.k8s.io/pause:3.1: (1.289485313s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-333688 cache add registry.k8s.io/pause:3.3: (1.141960039s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-333688 cache add registry.k8s.io/pause:latest: (1.122091176s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.55s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-333688 /tmp/TestFunctionalserialCacheCmdcacheadd_local419035042/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 cache add minikube-local-cache-test:functional-333688
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 cache delete minikube-local-cache-test:functional-333688
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-333688
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-333688 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (296.853738ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 kubectl -- --context functional-333688 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-333688 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (39.23s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-333688 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1123 08:05:12.583239 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-333688 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.223090976s)
functional_test.go:776: restart took 39.223226956s for "functional-333688" cluster.
I1123 08:05:48.067439 1043159 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (39.23s)
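
The restart above forwards an apiserver flag through minikube's --extra-config mechanism (component.flag=value) and waits for every component before returning. A minimal sketch of the same step, plus a check that the flag actually reached the static apiserver pod:

minikube start -p functional-333688 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
  --wait=all
kubectl --context functional-333688 -n kube-system describe pod -l component=kube-apiserver | grep enable-admission-plugins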

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-333688 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
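
The health check reads the control-plane pods and asserts each is Running and Ready. Roughly the same view can be had with a label selector and jsonpath (a sketch; the test itself parses the JSON in Go):

kubectl --context functional-333688 -n kube-system get po -l tier=control-plane \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'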

                                                
                                    
TestFunctional/serial/LogsCmd (1.42s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-333688 logs: (1.419143603s)
--- PASS: TestFunctional/serial/LogsCmd (1.42s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 logs --file /tmp/TestFunctionalserialLogsFileCmd1534184148/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-333688 logs --file /tmp/TestFunctionalserialLogsFileCmd1534184148/001/logs.txt: (1.455914481s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)
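
minikube logs prints the aggregated cluster logs to stdout, and --file writes the same output to a path instead; that is the only difference between the two checks above. A quick sketch:

minikube -p functional-333688 logs | tail -n 20           # print to stdout
minikube -p functional-333688 logs --file /tmp/logs.txt   # write to a file instead
wc -l /tmp/logs.txt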

                                                
                                    
TestFunctional/serial/InvalidService (4.08s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-333688 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-333688
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-333688: exit status 115 (393.360853ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31613 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-333688 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.08s)
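
The non-zero exit here is the point of the test: the service exists but selects no running pod, so minikube service fails with SVC_UNREACHABLE (exit 115). A hedged sketch of a manifest that reproduces the condition; it is an illustration, not the contents of testdata/invalidsvc.yaml:

kubectl --context functional-333688 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist   # hypothetical label that no pod carries
  ports:
  - port: 80
EOF

minikube -p functional-333688 service invalid-svc   # exits 115: no running pod for service invalid-svc
kubectl --context functional-333688 delete svc invalid-svc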

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-333688 config get cpus: exit status 14 (59.443683ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-333688 config get cpus: exit status 14 (138.565941ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
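
minikube config stores user defaults; config get on an unset key exits with code 14 ("specified key could not be found in config"), which is why the first and last get cpus calls above are non-zero. Sketch:

minikube -p functional-333688 config unset cpus
minikube -p functional-333688 config get cpus     # exit 14: key not set
minikube -p functional-333688 config set cpus 2
minikube -p functional-333688 config get cpus     # prints 2
minikube -p functional-333688 config unset cpus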

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-333688 --alsologtostderr -v=1]
2025/11/23 08:16:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-333688 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 1070303: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.54s)
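
The dashboard test starts minikube dashboard --url on a fixed port and polls the printed proxy URL until it answers. A manual sketch using the same port as this run:

minikube -p functional-333688 dashboard --url --port 36195 &
sleep 10   # give the proxy a moment to come up
curl -sf "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/" >/dev/null && echo "dashboard reachable"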

                                                
                                    
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-333688 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-333688 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (189.933274ms)

                                                
                                                
-- stdout --
	* [functional-333688] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:16:24.472360 1069783 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:16:24.472663 1069783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:16:24.472693 1069783 out.go:374] Setting ErrFile to fd 2...
	I1123 08:16:24.472714 1069783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:16:24.473159 1069783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:16:24.473641 1069783 out.go:368] Setting JSON to false
	I1123 08:16:24.474627 1069783 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":32330,"bootTime":1763853455,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 08:16:24.474732 1069783 start.go:143] virtualization:  
	I1123 08:16:24.480079 1069783 out.go:179] * [functional-333688] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:16:24.483373 1069783 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:16:24.483568 1069783 notify.go:221] Checking for updates...
	I1123 08:16:24.488881 1069783 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:16:24.491781 1069783 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:16:24.494535 1069783 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 08:16:24.497325 1069783 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:16:24.500073 1069783 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:16:24.503397 1069783 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:16:24.504038 1069783 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:16:24.536610 1069783 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:16:24.536714 1069783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:16:24.596820 1069783 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:16:24.586423241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:16:24.596935 1069783 docker.go:319] overlay module found
	I1123 08:16:24.600197 1069783 out.go:179] * Using the docker driver based on existing profile
	I1123 08:16:24.603058 1069783 start.go:309] selected driver: docker
	I1123 08:16:24.603073 1069783 start.go:927] validating driver "docker" against &{Name:functional-333688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-333688 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:16:24.603166 1069783 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:16:24.606968 1069783 out.go:203] 
	W1123 08:16:24.609784 1069783 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1123 08:16:24.612651 1069783 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-333688 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)
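
minikube start --dry-run validates the requested configuration without creating or mutating anything; requesting 250MB trips the RSRC_INSUFFICIENT_REQ_MEMORY check (exit 23) because the usable minimum is 1800MB. Sketch:

minikube start -p functional-333688 --dry-run --memory 250MB --driver=docker --container-runtime=crio
echo $?   # 23

minikube start -p functional-333688 --dry-run --driver=docker --container-runtime=crio   # validates the existing profile cleanly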

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-333688 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-333688 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (191.155052ms)

                                                
                                                
-- stdout --
	* [functional-333688] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:16:25.962830 1070125 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:16:25.962964 1070125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:16:25.962975 1070125 out.go:374] Setting ErrFile to fd 2...
	I1123 08:16:25.962980 1070125 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:16:25.963394 1070125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:16:25.963749 1070125 out.go:368] Setting JSON to false
	I1123 08:16:25.964625 1070125 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":32331,"bootTime":1763853455,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 08:16:25.964688 1070125 start.go:143] virtualization:  
	I1123 08:16:25.967977 1070125 out.go:179] * [functional-333688] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1123 08:16:25.971024 1070125 notify.go:221] Checking for updates...
	I1123 08:16:25.971095 1070125 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:16:25.973984 1070125 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:16:25.976860 1070125 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:16:25.979645 1070125 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 08:16:25.982555 1070125 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:16:25.985832 1070125 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:16:25.989115 1070125 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:16:25.989717 1070125 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:16:26.015116 1070125 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:16:26.015270 1070125 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:16:26.079090 1070125 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:16:26.066587142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:16:26.079257 1070125 docker.go:319] overlay module found
	I1123 08:16:26.082292 1070125 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1123 08:16:26.085062 1070125 start.go:309] selected driver: docker
	I1123 08:16:26.085084 1070125 start.go:927] validating driver "docker" against &{Name:functional-333688 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-333688 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:16:26.085195 1070125 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:16:26.088552 1070125 out.go:203] 
	W1123 08:16:26.091349 1070125 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1123 08:16:26.094078 1070125 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
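
The French output above is the same --dry-run failure rendered through minikube's translations; the message catalog is selected from the process locale. A sketch, with the assumption that setting LC_ALL is enough to switch languages (the harness sets the locale for the child process; a French locale must be installed on the host):

# hypothetical locale value; any installed French locale should work
LC_ALL=fr_FR.UTF-8 minikube start -p functional-333688 --dry-run --memory 250MB \
  --driver=docker --container-runtime=crio
# expected: "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..." and exit status 23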

                                                
                                    
TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)
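
minikube status supports a Go-template format string (-f) and JSON output (-o json) in addition to the default table, which is what the three invocations above cover. Sketch, with the template fields used in the log:

minikube -p functional-333688 status
minikube -p functional-333688 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
minikube -p functional-333688 status -o json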

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (28.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [5a59492a-a4e6-45f9-81fd-5e93313ded3b] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003860022s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-333688 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-333688 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-333688 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-333688 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [7c975f0b-2319-40b8-8289-bdac5787b1e8] Pending
helpers_test.go:352: "sp-pod" [7c975f0b-2319-40b8-8289-bdac5787b1e8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [7c975f0b-2319-40b8-8289-bdac5787b1e8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003712292s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-333688 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-333688 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-333688 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [1799b2bd-6014-485b-be82-eb042015717a] Pending
helpers_test.go:352: "sp-pod" [1799b2bd-6014-485b-be82-eb042015717a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [1799b2bd-6014-485b-be82-eb042015717a] Running
E1123 08:06:34.504700 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003244807s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-333688 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.60s)
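
The PVC test provisions a claim through the default storage-provisioner, mounts it in a pod, writes a file, recreates the pod, and checks the file survived. A hedged sketch of the same flow; the manifest below is an illustrative stand-in for testdata/storage-provisioner/pvc.yaml and pod.yaml, reusing the names that appear in the log:

kubectl --context functional-333688 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: docker.io/library/nginx:alpine
    volumeMounts:
    - mountPath: /tmp/mount
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF
kubectl --context functional-333688 wait --for=condition=Ready pod/sp-pod --timeout=120s

# write through the mount, recreate the pod, and confirm the file survived
kubectl --context functional-333688 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-333688 delete pod sp-pod
# re-apply the same manifest (the PVC apply is idempotent), wait for Ready again, then:
kubectl --context functional-333688 exec sp-pod -- ls /tmp/mount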

                                                
                                    
TestFunctional/parallel/SSHCmd (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.80s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh -n functional-333688 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 cp functional-333688:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2887065173/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh -n functional-333688 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh -n functional-333688 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.66s)
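
minikube cp copies files from the host into the node, and back out using the profile:path form, which is what the round trips above verify. Sketch:

echo "hello from the host" > /tmp/cp-test.txt
minikube -p functional-333688 cp /tmp/cp-test.txt /home/docker/cp-test.txt
minikube -p functional-333688 ssh -n functional-333688 "sudo cat /home/docker/cp-test.txt"

minikube -p functional-333688 cp functional-333688:/home/docker/cp-test.txt /tmp/cp-test-out.txt   # node -> host
diff /tmp/cp-test.txt /tmp/cp-test-out.txt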

                                                
                                    
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1043159/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "sudo cat /etc/test/nested/copy/1043159/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)
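
FileSync relies on minikube mirroring everything under the minikube home's files/ tree into the node rooted at /, so a file staged on the host appears at the same path inside the VM after a start; the numeric directory in the log is this run's test PID, and this CI run uses a custom MINIKUBE_HOME rather than ~/.minikube. A sketch of the mechanism (the "demo" directory is hypothetical):

mkdir -p ~/.minikube/files/etc/test/nested/copy/demo
echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/demo/hosts
minikube start -p functional-333688    # the files/ tree is synced during start
minikube -p functional-333688 ssh "sudo cat /etc/test/nested/copy/demo/hosts"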

                                                
                                    
TestFunctional/parallel/CertSync (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1043159.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "sudo cat /etc/ssl/certs/1043159.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1043159.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "sudo cat /usr/share/ca-certificates/1043159.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/10431592.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "sudo cat /etc/ssl/certs/10431592.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/10431592.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "sudo cat /usr/share/ca-certificates/10431592.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.71s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-333688 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-333688 ssh "sudo systemctl is-active docker": exit status 1 (399.507647ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-333688 ssh "sudo systemctl is-active containerd": exit status 1 (392.443668ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.79s)
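
With crio selected as the container runtime, the docker and containerd units inside the node are expected to be inactive; systemctl is-active exits 3 for an inactive unit (the "Process exited with status 3" above), which minikube ssh reports as a failure. Sketch:

minikube -p functional-333688 ssh "sudo systemctl is-active crio"         # active, exit 0
minikube -p functional-333688 ssh "sudo systemctl is-active docker"       # inactive, remote exit 3
minikube -p functional-333688 ssh "sudo systemctl is-active containerd"   # inactive, remote exit 3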

                                                
                                    
TestFunctional/parallel/License (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-333688 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-333688 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-333688 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-333688 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1065072: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.98s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-333688 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-333688 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [888f0493-89ce-441a-a1ac-052a21331578] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [888f0493-89ce-441a-a1ac-052a21331578] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003689943s
I1123 08:06:06.549176 1043159 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.48s)
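
The tunnel tests deploy an nginx pod plus a service and rely on a running minikube tunnel to make it reachable from the host. A hedged sketch of the flow, assuming a LoadBalancer service (the commands below are an illustration, not testdata/testsvc.yaml):

# terminal 1: keep the tunnel running so LoadBalancer services get a reachable IP
minikube -p functional-333688 tunnel

# terminal 2: create an nginx pod and expose it as a LoadBalancer
kubectl --context functional-333688 run nginx-svc --image=docker.io/library/nginx:alpine --labels=run=nginx-svc
kubectl --context functional-333688 expose pod nginx-svc --type=LoadBalancer --port=80
kubectl --context functional-333688 get svc nginx-svc -w   # wait for an EXTERNAL-IP, then curl it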

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-333688 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-333688 image ls --format short --alsologtostderr:
I1123 08:16:34.733655 1070552 out.go:360] Setting OutFile to fd 1 ...
I1123 08:16:34.733780 1070552 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:16:34.733798 1070552 out.go:374] Setting ErrFile to fd 2...
I1123 08:16:34.733804 1070552 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:16:34.734045 1070552 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
I1123 08:16:34.734666 1070552 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:16:34.734793 1070552 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:16:34.735347 1070552 cli_runner.go:164] Run: docker container inspect functional-333688 --format={{.State.Status}}
I1123 08:16:34.751833 1070552 ssh_runner.go:195] Run: systemctl --version
I1123 08:16:34.751896 1070552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333688
I1123 08:16:34.770231 1070552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34237 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/functional-333688/id_rsa Username:docker}
I1123 08:16:34.874178 1070552 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
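
minikube image ls renders the node's image store in several formats; short, table, and json above (and yaml, further below) all come from the same "sudo crictl images --output json" call visible in the stderr traces. Sketch:

minikube -p functional-333688 image ls --format short
minikube -p functional-333688 image ls --format table
minikube -p functional-333688 image ls --format json | head -c 200
minikube -p functional-333688 image ls --format yaml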

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-333688 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ localhost/my-image                      │ functional-333688  │ 2a84be7ce7629 │ 1.64MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-333688 image ls --format table --alsologtostderr:
I1123 08:16:39.403842 1071252 out.go:360] Setting OutFile to fd 1 ...
I1123 08:16:39.404070 1071252 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:16:39.404100 1071252 out.go:374] Setting ErrFile to fd 2...
I1123 08:16:39.404122 1071252 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:16:39.404422 1071252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
I1123 08:16:39.405121 1071252 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:16:39.405299 1071252 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:16:39.405877 1071252 cli_runner.go:164] Run: docker container inspect functional-333688 --format={{.State.Status}}
I1123 08:16:39.442506 1071252 ssh_runner.go:195] Run: systemctl --version
I1123 08:16:39.442557 1071252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333688
I1123 08:16:39.463586 1071252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34237 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/functional-333688/id_rsa Username:docker}
I1123 08:16:39.585939 1071252 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-333688 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags"
:["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-m
inikube/storage-provisioner:v5"],"size":"29037500"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8
s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304
a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest"],"size":"175943180"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa
9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-333688 image ls --format json --alsologtostderr:
I1123 08:16:34.958206 1070588 out.go:360] Setting OutFile to fd 1 ...
I1123 08:16:34.958350 1070588 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:16:34.958373 1070588 out.go:374] Setting ErrFile to fd 2...
I1123 08:16:34.958381 1070588 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:16:34.958656 1070588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
I1123 08:16:34.959294 1070588 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:16:34.959457 1070588 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:16:34.960017 1070588 cli_runner.go:164] Run: docker container inspect functional-333688 --format={{.State.Status}}
I1123 08:16:34.981466 1070588 ssh_runner.go:195] Run: systemctl --version
I1123 08:16:34.981525 1070588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333688
I1123 08:16:35.005228 1070588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34237 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/functional-333688/id_rsa Username:docker}
I1123 08:16:35.110237 1070588 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
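For reference, the JSON listing above can be reproduced outside the test harness with the same command the test invokes; the jq filter below is only illustrative and assumes jq is installed on the host (it is not part of the test).
	$ out/minikube-linux-arm64 -p functional-333688 image ls --format json \
	    | jq -r '.[] | select(.repoTags | length > 0) | .repoTags[0] + " " + .size'
	# prints one "tag size-in-bytes" line per tagged image; entries with an empty
	# repoTags array (for example the dashboard image) are skipped by the select filter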

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-333688 image ls --format yaml --alsologtostderr:
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ba72fa25de5d527bfac380cfef06dd1b9626293fab954014c71f49d7a80e889e
repoDigests:
- docker.io/library/962d3befcac7283c36fcfe575d1d720f7c02604155b78d4a46fc5a258a27e3b6-tmp@sha256:c4fc09dd8e972727c411977408260dda16f3471a8bc55290891f3879e7ac1367
repoTags: []
size: "1638179"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1634527"
- id: 2a84be7ce7629d6ab1cb1f28a14f93b7c1913786a291966fc09a28de527b03ad
repoDigests:
- localhost/my-image@sha256:fef78e048ab962e77a9711e0685ec8d7a44cb24845b20bdf45ae59f933c1fb9e
repoTags:
- localhost/my-image:functional-333688
size: "1640791"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-333688 image ls --format yaml --alsologtostderr:
I1123 08:16:39.135525 1071211 out.go:360] Setting OutFile to fd 1 ...
I1123 08:16:39.135714 1071211 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:16:39.135739 1071211 out.go:374] Setting ErrFile to fd 2...
I1123 08:16:39.135759 1071211 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:16:39.136058 1071211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
I1123 08:16:39.136787 1071211 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:16:39.136958 1071211 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:16:39.137531 1071211 cli_runner.go:164] Run: docker container inspect functional-333688 --format={{.State.Status}}
I1123 08:16:39.154701 1071211 ssh_runner.go:195] Run: systemctl --version
I1123 08:16:39.154748 1071211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333688
I1123 08:16:39.172821 1071211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34237 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/functional-333688/id_rsa Username:docker}
I1123 08:16:39.277796 1071211 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-333688 ssh pgrep buildkitd: exit status 1 (264.946851ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 image build -t localhost/my-image:functional-333688 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-333688 image build -t localhost/my-image:functional-333688 testdata/build --alsologtostderr: (3.442345304s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-333688 image build -t localhost/my-image:functional-333688 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ba72fa25de5
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-333688
--> 2a84be7ce76
Successfully tagged localhost/my-image:functional-333688
2a84be7ce7629d6ab1cb1f28a14f93b7c1913786a291966fc09a28de527b03ad
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-333688 image build -t localhost/my-image:functional-333688 testdata/build --alsologtostderr:
I1123 08:16:35.471528 1070689 out.go:360] Setting OutFile to fd 1 ...
I1123 08:16:35.472657 1070689 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:16:35.472669 1070689 out.go:374] Setting ErrFile to fd 2...
I1123 08:16:35.472675 1070689 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:16:35.473014 1070689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
I1123 08:16:35.473938 1070689 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:16:35.477888 1070689 config.go:182] Loaded profile config "functional-333688": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:16:35.478608 1070689 cli_runner.go:164] Run: docker container inspect functional-333688 --format={{.State.Status}}
I1123 08:16:35.504881 1070689 ssh_runner.go:195] Run: systemctl --version
I1123 08:16:35.504947 1070689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333688
I1123 08:16:35.521784 1070689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34237 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/functional-333688/id_rsa Username:docker}
I1123 08:16:35.625636 1070689 build_images.go:162] Building image from path: /tmp/build.3095084345.tar
I1123 08:16:35.625727 1070689 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1123 08:16:35.633766 1070689 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3095084345.tar
I1123 08:16:35.637526 1070689 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3095084345.tar: stat -c "%s %y" /var/lib/minikube/build/build.3095084345.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3095084345.tar': No such file or directory
I1123 08:16:35.637566 1070689 ssh_runner.go:362] scp /tmp/build.3095084345.tar --> /var/lib/minikube/build/build.3095084345.tar (3072 bytes)
I1123 08:16:35.655036 1070689 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3095084345
I1123 08:16:35.666697 1070689 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3095084345 -xf /var/lib/minikube/build/build.3095084345.tar
I1123 08:16:35.675099 1070689 crio.go:315] Building image: /var/lib/minikube/build/build.3095084345
I1123 08:16:35.675220 1070689 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-333688 /var/lib/minikube/build/build.3095084345 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1123 08:16:38.826705 1070689 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-333688 /var/lib/minikube/build/build.3095084345 --cgroup-manager=cgroupfs: (3.151456297s)
I1123 08:16:38.826771 1070689 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3095084345
I1123 08:16:38.834759 1070689 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3095084345.tar
I1123 08:16:38.842271 1070689 build_images.go:218] Built localhost/my-image:functional-333688 from /tmp/build.3095084345.tar
I1123 08:16:38.842298 1070689 build_images.go:134] succeeded building to: functional-333688
I1123 08:16:38.842303 1070689 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.94s)
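The STEP lines above imply that the testdata/build context holds a Dockerfile equivalent to "FROM gcr.io/k8s-minikube/busybox; RUN true; ADD content.txt /" (a reconstruction; the actual testdata contents are not included in this report). On a crio cluster the build is delegated to podman inside the node, as the stderr shows; a minimal way to repeat the build and confirm the result:
	$ out/minikube-linux-arm64 -p functional-333688 image build -t localhost/my-image:functional-333688 testdata/build --alsologtostderr
	$ out/minikube-linux-arm64 -p functional-333688 image ls   # localhost/my-image:functional-333688 should now appear in the listing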

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-333688
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 image rm kicbase/echo-server:functional-333688 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-333688 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.187.176 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-333688 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-333688 /tmp/TestFunctionalparallelMountCmdany-port1789193375/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763885166784446993" to /tmp/TestFunctionalparallelMountCmdany-port1789193375/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763885166784446993" to /tmp/TestFunctionalparallelMountCmdany-port1789193375/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763885166784446993" to /tmp/TestFunctionalparallelMountCmdany-port1789193375/001/test-1763885166784446993
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-333688 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (439.283429ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1123 08:06:07.223980 1043159 retry.go:31] will retry after 724.605512ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 23 08:06 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 23 08:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 23 08:06 test-1763885166784446993
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh cat /mount-9p/test-1763885166784446993
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-333688 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [6eedaef8-e3ef-4e37-b5f1-bf28122d7090] Pending
helpers_test.go:352: "busybox-mount" [6eedaef8-e3ef-4e37-b5f1-bf28122d7090] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [6eedaef8-e3ef-4e37-b5f1-bf28122d7090] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [6eedaef8-e3ef-4e37-b5f1-bf28122d7090] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003091778s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-333688 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-333688 /tmp/TestFunctionalparallelMountCmdany-port1789193375/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.48s)
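A minimal sketch of the 9p mount flow this test exercises, built from the commands recorded above; /tmp/hostdir stands in for the temporary directory the test creates and is illustrative only.
	$ out/minikube-linux-arm64 mount -p functional-333688 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
	# keep the mount helper running in the background, then verify from the guest:
	$ out/minikube-linux-arm64 -p functional-333688 ssh "findmnt -T /mount-9p | grep 9p"
	$ out/minikube-linux-arm64 -p functional-333688 ssh -- ls -la /mount-9p   # files written on the host are visible here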

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-333688 /tmp/TestFunctionalparallelMountCmdspecific-port3244396986/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-333688 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (548.733976ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1123 08:06:15.816052 1043159 retry.go:31] will retry after 632.583359ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-333688 /tmp/TestFunctionalparallelMountCmdspecific-port3244396986/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-333688 ssh "sudo umount -f /mount-9p": exit status 1 (345.524584ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-333688 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-333688 /tmp/TestFunctionalparallelMountCmdspecific-port3244396986/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.45s)
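The same flow with the 9p server pinned to a fixed host port, as in the daemon line above; port 46464 is the value this test uses and /tmp/hostdir is again illustrative.
	$ out/minikube-linux-arm64 mount -p functional-333688 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 --port 46464 &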

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-333688 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4279472702/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-333688 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4279472702/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-333688 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4279472702/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-333688 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-333688 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4279472702/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-333688 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4279472702/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-333688 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4279472702/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)
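Cleanup of several concurrent mounts is a single call, as shown above; a sketch of the teardown plus a check that no 9p mount survives:
	$ out/minikube-linux-arm64 mount -p functional-333688 --kill=true
	$ out/minikube-linux-arm64 -p functional-333688 ssh "findmnt -T /mount1 | grep 9p"   # exits non-zero once the mounts are gone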

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "373.143794ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "56.315983ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "359.168628ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "51.167383ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-333688 service list: (1.308162352s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-333688 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-333688 service list -o json: (1.395396513s)
functional_test.go:1504: Took "1.395477585s" to run "out/minikube-linux-arm64 -p functional-333688 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.40s)
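The JSON form of the service list can be piped to other tools; a minimal sketch, assuming jq is installed (the test itself only times the command and checks that it succeeds):
	$ out/minikube-linux-arm64 -p functional-333688 service list -o json | jq .   # pretty-print the service entries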

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-333688
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-333688
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-333688
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (199.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1123 08:18:50.642944 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-861906 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m18.235336592s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (199.11s)
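A minimal sketch of the HA start recorded above, using the same flags as the test, followed by a per-node status check:
	$ out/minikube-linux-arm64 -p ha-861906 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
	$ out/minikube-linux-arm64 -p ha-861906 status --alsologtostderr -v 5   # one status block per node; --ha gives three control-plane nodes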

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-861906 kubectl -- rollout status deployment/busybox: (3.862879563s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- exec busybox-7b57f96db7-2hsw7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- exec busybox-7b57f96db7-7khxp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- exec busybox-7b57f96db7-sxw88 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- exec busybox-7b57f96db7-2hsw7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- exec busybox-7b57f96db7-7khxp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- exec busybox-7b57f96db7-sxw88 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- exec busybox-7b57f96db7-2hsw7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- exec busybox-7b57f96db7-7khxp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- exec busybox-7b57f96db7-sxw88 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.58s)
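The DNS checks above run nslookup from each busybox replica; one round of the sequence looks like the following, with a pod name taken from this particular run (pod names vary between runs):
	$ out/minikube-linux-arm64 -p ha-861906 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	$ out/minikube-linux-arm64 -p ha-861906 kubectl -- rollout status deployment/busybox
	$ out/minikube-linux-arm64 -p ha-861906 kubectl -- exec busybox-7b57f96db7-2hsw7 -- nslookup kubernetes.default.svc.cluster.local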

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- exec busybox-7b57f96db7-2hsw7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- exec busybox-7b57f96db7-2hsw7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- exec busybox-7b57f96db7-7khxp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- exec busybox-7b57f96db7-7khxp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- exec busybox-7b57f96db7-sxw88 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 kubectl -- exec busybox-7b57f96db7-sxw88 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.47s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (30.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 node add --alsologtostderr -v 5
E1123 08:20:13.707990 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-861906 node add --alsologtostderr -v 5: (29.625544474s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-861906 status --alsologtostderr -v 5: (1.057727671s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (30.68s)
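The worker join is a single node add followed by a status check, as recorded above:
	$ out/minikube-linux-arm64 -p ha-861906 node add --alsologtostderr -v 5
	$ out/minikube-linux-arm64 -p ha-861906 status --alsologtostderr -v 5   # the new node shows up as type: Worker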

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-861906 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.182243235s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.18s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-861906 status --output json --alsologtostderr -v 5: (1.014542179s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp testdata/cp-test.txt ha-861906:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp ha-861906:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1727125494/001/cp-test_ha-861906.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp ha-861906:/home/docker/cp-test.txt ha-861906-m02:/home/docker/cp-test_ha-861906_ha-861906-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m02 "sudo cat /home/docker/cp-test_ha-861906_ha-861906-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp ha-861906:/home/docker/cp-test.txt ha-861906-m03:/home/docker/cp-test_ha-861906_ha-861906-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m03 "sudo cat /home/docker/cp-test_ha-861906_ha-861906-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp ha-861906:/home/docker/cp-test.txt ha-861906-m04:/home/docker/cp-test_ha-861906_ha-861906-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m04 "sudo cat /home/docker/cp-test_ha-861906_ha-861906-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp testdata/cp-test.txt ha-861906-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp ha-861906-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1727125494/001/cp-test_ha-861906-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp ha-861906-m02:/home/docker/cp-test.txt ha-861906:/home/docker/cp-test_ha-861906-m02_ha-861906.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906 "sudo cat /home/docker/cp-test_ha-861906-m02_ha-861906.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp ha-861906-m02:/home/docker/cp-test.txt ha-861906-m03:/home/docker/cp-test_ha-861906-m02_ha-861906-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m03 "sudo cat /home/docker/cp-test_ha-861906-m02_ha-861906-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp ha-861906-m02:/home/docker/cp-test.txt ha-861906-m04:/home/docker/cp-test_ha-861906-m02_ha-861906-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m04 "sudo cat /home/docker/cp-test_ha-861906-m02_ha-861906-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp testdata/cp-test.txt ha-861906-m03:/home/docker/cp-test.txt
E1123 08:20:56.067601 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:20:56.073891 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:20:56.085256 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:20:56.106635 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m03 "sudo cat /home/docker/cp-test.txt"
E1123 08:20:56.148148 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:20:56.229660 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:20:56.391297 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp ha-861906-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1727125494/001/cp-test_ha-861906-m03.txt
E1123 08:20:56.712978 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp ha-861906-m03:/home/docker/cp-test.txt ha-861906:/home/docker/cp-test_ha-861906-m03_ha-861906.txt
E1123 08:20:57.354217 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906 "sudo cat /home/docker/cp-test_ha-861906-m03_ha-861906.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp ha-861906-m03:/home/docker/cp-test.txt ha-861906-m02:/home/docker/cp-test_ha-861906-m03_ha-861906-m02.txt
E1123 08:20:58.635965 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m02 "sudo cat /home/docker/cp-test_ha-861906-m03_ha-861906-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp ha-861906-m03:/home/docker/cp-test.txt ha-861906-m04:/home/docker/cp-test_ha-861906-m03_ha-861906-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m04 "sudo cat /home/docker/cp-test_ha-861906-m03_ha-861906-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp testdata/cp-test.txt ha-861906-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m04 "sudo cat /home/docker/cp-test.txt"
E1123 08:21:01.198180 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp ha-861906-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1727125494/001/cp-test_ha-861906-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp ha-861906-m04:/home/docker/cp-test.txt ha-861906:/home/docker/cp-test_ha-861906-m04_ha-861906.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906 "sudo cat /home/docker/cp-test_ha-861906-m04_ha-861906.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp ha-861906-m04:/home/docker/cp-test.txt ha-861906-m02:/home/docker/cp-test_ha-861906-m04_ha-861906-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m02 "sudo cat /home/docker/cp-test_ha-861906-m04_ha-861906-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 cp ha-861906-m04:/home/docker/cp-test.txt ha-861906-m03:/home/docker/cp-test_ha-861906-m04_ha-861906-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m03 "sudo cat /home/docker/cp-test_ha-861906-m04_ha-861906-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.88s)
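The copy matrix above exercises host-to-node and node-to-node transfers; a minimal host-to-node example with verification, using names from this run:
	$ out/minikube-linux-arm64 -p ha-861906 cp testdata/cp-test.txt ha-861906-m02:/home/docker/cp-test.txt
	$ out/minikube-linux-arm64 -p ha-861906 ssh -n ha-861906-m02 "sudo cat /home/docker/cp-test.txt"   # should print the testdata contents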

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 node stop m02 --alsologtostderr -v 5
E1123 08:21:06.320296 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:21:16.561917 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-861906 node stop m02 --alsologtostderr -v 5: (12.075902132s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-861906 status --alsologtostderr -v 5: exit status 7 (764.981381ms)

                                                
                                                
-- stdout --
	ha-861906
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-861906-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-861906-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-861906-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:21:17.497303 1086516 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:21:17.497481 1086516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:21:17.497494 1086516 out.go:374] Setting ErrFile to fd 2...
	I1123 08:21:17.497500 1086516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:21:17.497785 1086516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:21:17.497997 1086516 out.go:368] Setting JSON to false
	I1123 08:21:17.498044 1086516 mustload.go:66] Loading cluster: ha-861906
	I1123 08:21:17.498130 1086516 notify.go:221] Checking for updates...
	I1123 08:21:17.498492 1086516 config.go:182] Loaded profile config "ha-861906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:21:17.498537 1086516 status.go:174] checking status of ha-861906 ...
	I1123 08:21:17.499487 1086516 cli_runner.go:164] Run: docker container inspect ha-861906 --format={{.State.Status}}
	I1123 08:21:17.518712 1086516 status.go:371] ha-861906 host status = "Running" (err=<nil>)
	I1123 08:21:17.518738 1086516 host.go:66] Checking if "ha-861906" exists ...
	I1123 08:21:17.519048 1086516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-861906
	I1123 08:21:17.543417 1086516 host.go:66] Checking if "ha-861906" exists ...
	I1123 08:21:17.543718 1086516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:21:17.543770 1086516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-861906
	I1123 08:21:17.561507 1086516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34242 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/ha-861906/id_rsa Username:docker}
	I1123 08:21:17.668834 1086516 ssh_runner.go:195] Run: systemctl --version
	I1123 08:21:17.675360 1086516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:21:17.692367 1086516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:21:17.753814 1086516 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-23 08:21:17.742259232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:21:17.754399 1086516 kubeconfig.go:125] found "ha-861906" server: "https://192.168.49.254:8443"
	I1123 08:21:17.754433 1086516 api_server.go:166] Checking apiserver status ...
	I1123 08:21:17.754483 1086516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:21:17.766490 1086516 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1245/cgroup
	I1123 08:21:17.774830 1086516 api_server.go:182] apiserver freezer: "10:freezer:/docker/ae8cfb5e2cd3d08e69084b14c880fa2b2e114fc6ae90d038ac5be631beb0e94b/crio/crio-bb422aa8d9a45843756c61037a8896d3a67ab5fb32937687e4ce5555cd274ada"
	I1123 08:21:17.774909 1086516 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ae8cfb5e2cd3d08e69084b14c880fa2b2e114fc6ae90d038ac5be631beb0e94b/crio/crio-bb422aa8d9a45843756c61037a8896d3a67ab5fb32937687e4ce5555cd274ada/freezer.state
	I1123 08:21:17.782365 1086516 api_server.go:204] freezer state: "THAWED"
	I1123 08:21:17.782392 1086516 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 08:21:17.790766 1086516 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 08:21:17.790849 1086516 status.go:463] ha-861906 apiserver status = Running (err=<nil>)
	I1123 08:21:17.790869 1086516 status.go:176] ha-861906 status: &{Name:ha-861906 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:21:17.790887 1086516 status.go:174] checking status of ha-861906-m02 ...
	I1123 08:21:17.791236 1086516 cli_runner.go:164] Run: docker container inspect ha-861906-m02 --format={{.State.Status}}
	I1123 08:21:17.807970 1086516 status.go:371] ha-861906-m02 host status = "Stopped" (err=<nil>)
	I1123 08:21:17.807995 1086516 status.go:384] host is not running, skipping remaining checks
	I1123 08:21:17.808003 1086516 status.go:176] ha-861906-m02 status: &{Name:ha-861906-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:21:17.808024 1086516 status.go:174] checking status of ha-861906-m03 ...
	I1123 08:21:17.808481 1086516 cli_runner.go:164] Run: docker container inspect ha-861906-m03 --format={{.State.Status}}
	I1123 08:21:17.826663 1086516 status.go:371] ha-861906-m03 host status = "Running" (err=<nil>)
	I1123 08:21:17.826715 1086516 host.go:66] Checking if "ha-861906-m03" exists ...
	I1123 08:21:17.827034 1086516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-861906-m03
	I1123 08:21:17.843732 1086516 host.go:66] Checking if "ha-861906-m03" exists ...
	I1123 08:21:17.844033 1086516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:21:17.844071 1086516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-861906-m03
	I1123 08:21:17.867672 1086516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34252 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/ha-861906-m03/id_rsa Username:docker}
	I1123 08:21:17.973178 1086516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:21:17.986081 1086516 kubeconfig.go:125] found "ha-861906" server: "https://192.168.49.254:8443"
	I1123 08:21:17.986112 1086516 api_server.go:166] Checking apiserver status ...
	I1123 08:21:17.986156 1086516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:21:17.997836 1086516 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup
	I1123 08:21:18.010578 1086516 api_server.go:182] apiserver freezer: "10:freezer:/docker/4219588a71316a7949d2fc6bdafefcf92029f12a88f222958a7ad398418a2249/crio/crio-e59026efddf33e25e62ce9429562884566261fb19539e39a3a67e6052949885f"
	I1123 08:21:18.010653 1086516 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4219588a71316a7949d2fc6bdafefcf92029f12a88f222958a7ad398418a2249/crio/crio-e59026efddf33e25e62ce9429562884566261fb19539e39a3a67e6052949885f/freezer.state
	I1123 08:21:18.018904 1086516 api_server.go:204] freezer state: "THAWED"
	I1123 08:21:18.018980 1086516 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 08:21:18.027372 1086516 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 08:21:18.027400 1086516 status.go:463] ha-861906-m03 apiserver status = Running (err=<nil>)
	I1123 08:21:18.027410 1086516 status.go:176] ha-861906-m03 status: &{Name:ha-861906-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:21:18.027426 1086516 status.go:174] checking status of ha-861906-m04 ...
	I1123 08:21:18.027741 1086516 cli_runner.go:164] Run: docker container inspect ha-861906-m04 --format={{.State.Status}}
	I1123 08:21:18.047938 1086516 status.go:371] ha-861906-m04 host status = "Running" (err=<nil>)
	I1123 08:21:18.047965 1086516 host.go:66] Checking if "ha-861906-m04" exists ...
	I1123 08:21:18.048400 1086516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-861906-m04
	I1123 08:21:18.072728 1086516 host.go:66] Checking if "ha-861906-m04" exists ...
	I1123 08:21:18.073054 1086516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:21:18.073101 1086516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-861906-m04
	I1123 08:21:18.091748 1086516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34257 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/ha-861906-m04/id_rsa Username:docker}
	I1123 08:21:18.196448 1086516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:21:18.209130 1086516 status.go:176] ha-861906-m04 status: &{Name:ha-861906-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.84s)
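The status dump above is what the assertion keys on: `minikube status` reports host, kubelet, apiserver, and kubeconfig per node, and returns a non-zero exit code (7 in this run) once any node is stopped. A short sketch of the same check, assuming a `minikube` binary on PATH:

    # stop the second control-plane node, then inspect the per-node status
    minikube -p ha-861906 node stop m02 --alsologtostderr -v 5
    minikube -p ha-861906 status --alsologtostderr -v 5
    echo "status exit code: $?"   # 7 in this run while m02 is down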

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (29.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 node start m02 --alsologtostderr -v 5
E1123 08:21:37.043688 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-861906 node start m02 --alsologtostderr -v 5: (28.45549436s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-861906 status --alsologtostderr -v 5: (1.172851697s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (29.80s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.420663236s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.42s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (144.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-861906 stop --alsologtostderr -v 5: (27.132660898s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 start --wait true --alsologtostderr -v 5
E1123 08:22:18.005052 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:23:39.927150 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:23:50.642687 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-861906 start --wait true --alsologtostderr -v 5: (1m56.950777888s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (144.27s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-861906 node delete m03 --alsologtostderr -v 5: (10.660989671s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.67s)
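Deleting a secondary control plane is only half of the check; the test then asks the API server whether every remaining node still reports Ready, using the go-template shown above. A sketch of that verification, assuming kubectl is pointed at the ha-861906 context:

    # drop the third control-plane node from the cluster
    minikube -p ha-861906 node delete m03 --alsologtostderr -v 5
    # prints one Ready status per remaining node; every line should read True
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'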

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-861906 stop --alsologtostderr -v 5: (35.961361306s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-861906 status --alsologtostderr -v 5: exit status 7 (116.117527ms)

                                                
                                                
-- stdout --
	ha-861906
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-861906-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-861906-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:25:02.984469 1098398 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:25:02.984600 1098398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:25:02.984614 1098398 out.go:374] Setting ErrFile to fd 2...
	I1123 08:25:02.984620 1098398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:25:02.984954 1098398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:25:02.985134 1098398 out.go:368] Setting JSON to false
	I1123 08:25:02.985166 1098398 mustload.go:66] Loading cluster: ha-861906
	I1123 08:25:02.985220 1098398 notify.go:221] Checking for updates...
	I1123 08:25:02.985583 1098398 config.go:182] Loaded profile config "ha-861906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:25:02.985603 1098398 status.go:174] checking status of ha-861906 ...
	I1123 08:25:02.986514 1098398 cli_runner.go:164] Run: docker container inspect ha-861906 --format={{.State.Status}}
	I1123 08:25:03.008849 1098398 status.go:371] ha-861906 host status = "Stopped" (err=<nil>)
	I1123 08:25:03.008878 1098398 status.go:384] host is not running, skipping remaining checks
	I1123 08:25:03.008885 1098398 status.go:176] ha-861906 status: &{Name:ha-861906 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:25:03.008927 1098398 status.go:174] checking status of ha-861906-m02 ...
	I1123 08:25:03.009252 1098398 cli_runner.go:164] Run: docker container inspect ha-861906-m02 --format={{.State.Status}}
	I1123 08:25:03.027217 1098398 status.go:371] ha-861906-m02 host status = "Stopped" (err=<nil>)
	I1123 08:25:03.027241 1098398 status.go:384] host is not running, skipping remaining checks
	I1123 08:25:03.027259 1098398 status.go:176] ha-861906-m02 status: &{Name:ha-861906-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:25:03.027280 1098398 status.go:174] checking status of ha-861906-m04 ...
	I1123 08:25:03.027593 1098398 cli_runner.go:164] Run: docker container inspect ha-861906-m04 --format={{.State.Status}}
	I1123 08:25:03.047922 1098398 status.go:371] ha-861906-m04 host status = "Stopped" (err=<nil>)
	I1123 08:25:03.047945 1098398 status.go:384] host is not running, skipping remaining checks
	I1123 08:25:03.047953 1098398 status.go:176] ha-861906-m04 status: &{Name:ha-861906-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.08s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (93.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1123 08:25:56.067410 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:23.769594 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-861906 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m32.912342766s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (93.89s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (55.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-861906 node add --control-plane --alsologtostderr -v 5: (54.611999728s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-861906 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-861906 status --alsologtostderr -v 5: (1.061812346s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (55.67s)
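Adding a node back into the HA cluster is the mirror image of the delete case: `node add --control-plane` grows the control plane, and `status` should then list the new member as Running. A sketch, assuming a `minikube` binary on PATH:

    # add one more control-plane node to the existing profile, then recheck every node
    minikube -p ha-861906 node add --control-plane --alsologtostderr -v 5
    minikube -p ha-861906 status --alsologtostderr -v 5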

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.110507918s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

                                                
                                    
TestJSONOutput/start/Command (79.17s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-106359 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1123 08:28:50.642359 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-106359 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m19.162066906s)
--- PASS: TestJSONOutput/start/Command (79.17s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.87s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-106359 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-106359 --output=json --user=testUser: (5.868984849s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-516776 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-516776 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (90.977489ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"29f577c2-9965-457f-88f7-92741f549881","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-516776] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b81811c0-a16f-45b1-859f-fd58a225a92c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21966"}}
	{"specversion":"1.0","id":"05210b78-9afd-44f0-8937-59a1b4628349","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"480829f5-8b11-458a-bdfb-242a7d7d7ccb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig"}}
	{"specversion":"1.0","id":"94d77680-2b1f-419e-9331-1a50c258620e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube"}}
	{"specversion":"1.0","id":"169f98d9-f0a3-426c-bb31-e219f2f5cbe0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f56ba59f-9437-438f-befd-972550247d2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6c7dac1c-6770-480c-b2cd-d064eb651f3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-516776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-516776
--- PASS: TestErrorJSONOutput (0.23s)
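With --output=json, every line minikube writes to stdout is a self-contained CloudEvents-style object (specversion, type, data), as in the dump above, so the stream is easy to post-process. A sketch of pulling the error message out of that stream, assuming jq is available; the profile name is the throwaway one used by this test:

    # filter the line-delimited JSON events for the error event and print its message
    minikube start -p json-output-error-516776 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # in this run that message is: The driver 'fail' is not supported on linux/arm64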

                                                
                                    
TestKicCustomNetwork/create_custom_network (44.81s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-346755 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-346755 --network=: (42.05325985s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-346755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-346755
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-346755: (2.725612642s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.81s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (38.81s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-833905 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-833905 --network=bridge: (36.645217938s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-833905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-833905
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-833905: (2.135311786s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (38.81s)

                                                
                                    
TestKicExistingNetwork (36.33s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1123 08:30:39.523417 1043159 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1123 08:30:39.538351 1043159 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1123 08:30:39.538432 1043159 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1123 08:30:39.538451 1043159 cli_runner.go:164] Run: docker network inspect existing-network
W1123 08:30:39.553861 1043159 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1123 08:30:39.553892 1043159 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1123 08:30:39.553905 1043159 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1123 08:30:39.554019 1043159 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1123 08:30:39.572314 1043159 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-32d396d9f7df IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:9b:29:4a:5c:ab} reservation:<nil>}
I1123 08:30:39.572748 1043159 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d46280}
I1123 08:30:39.572777 1043159 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1123 08:30:39.572833 1043159 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1123 08:30:39.635846 1043159 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-872210 --network=existing-network
E1123 08:30:56.070744 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-872210 --network=existing-network: (34.052512787s)
helpers_test.go:175: Cleaning up "existing-network-872210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-872210
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-872210: (2.121501265s)
I1123 08:31:15.826479 1043159 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.33s)
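TestKicExistingNetwork creates the docker network itself before asking minikube to join it, which is why the log shows a `docker network create` invocation carrying minikube's labels. A sketch of the same flow, assuming 192.168.58.0/24 is free on the host (the subnet the harness picked automatically here) and a `minikube` binary on PATH:

    # pre-create the bridge network exactly as the harness does above
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
      existing-network
    # start a cluster attached to the pre-existing network instead of a minikube-managed one
    minikube start -p existing-network-872210 --network=existing-network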

                                                
                                    
TestKicCustomSubnet (33.74s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-216982 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-216982 --subnet=192.168.60.0/24: (31.484424317s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-216982 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-216982" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-216982
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-216982: (2.224854148s)
--- PASS: TestKicCustomSubnet (33.74s)

                                                
                                    
TestKicStaticIP (37.01s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-789562 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-789562 --static-ip=192.168.200.200: (34.681128297s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-789562 ip
helpers_test.go:175: Cleaning up "static-ip-789562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-789562
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-789562: (2.184574921s)
--- PASS: TestKicStaticIP (37.01s)
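TestKicCustomSubnet and TestKicStaticIP pin the kic container's addressing from the command line and then read the result back, either through `minikube ip` or through the docker network's IPAM config. A sketch combining both checks, assuming a `minikube` binary on PATH:

    # pin the node to a fixed address and confirm minikube reports it back
    minikube start -p static-ip-789562 --static-ip=192.168.200.200
    minikube -p static-ip-789562 ip        # expected: 192.168.200.200
    # pin only the subnet and inspect the network minikube created for it
    minikube start -p custom-subnet-216982 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-216982 --format '{{(index .IPAM.Config 0).Subnet}}'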

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (69.89s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-483085 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-483085 --driver=docker  --container-runtime=crio: (31.342952477s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-485571 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-485571 --driver=docker  --container-runtime=crio: (32.946989214s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-483085
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-485571
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-485571" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-485571
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-485571: (2.140285788s)
helpers_test.go:175: Cleaning up "first-483085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-483085
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-483085: (2.03700875s)
--- PASS: TestMinikubeProfile (69.89s)
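TestMinikubeProfile stands up two independent clusters and then flips the active profile between them, checking the result through the JSON profile listing each time. A sketch of that sequence, assuming a `minikube` binary on PATH:

    # two separate clusters under different profile names
    minikube start -p first-483085 --driver=docker --container-runtime=crio
    minikube start -p second-485571 --driver=docker --container-runtime=crio
    # switch the active profile, then list profiles as JSON (the test inspects this
    # output to confirm which profile is currently active)
    minikube profile first-483085
    minikube profile list -ojson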

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.97s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-506308 --memory=3072 --mount-string /tmp/TestMountStartserial2796445664/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-506308 --memory=3072 --mount-string /tmp/TestMountStartserial2796445664/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.96547007s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.97s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-506308 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
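StartWithMountFirst and VerifyMountFirst exercise minikube's host-directory mount: the profile is started with --mount-string and no Kubernetes, and the mount is then verified simply by listing the target path over ssh. A sketch, assuming a `minikube` binary on PATH and an illustrative host directory (/tmp/host-share is hypothetical; the test uses a per-run temp dir):

    # share a host directory into the guest at /minikube-host
    minikube start -p mount-start-1-506308 --memory=3072 \
      --mount-string /tmp/host-share:/minikube-host \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=crio
    # the verification is just a directory listing from inside the node
    minikube -p mount-start-1-506308 ssh -- ls /minikube-host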

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.62s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-508122 --memory=3072 --mount-string /tmp/TestMountStartserial2796445664/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1123 08:33:50.645613 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-508122 --memory=3072 --mount-string /tmp/TestMountStartserial2796445664/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.614428663s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.62s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-508122 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-506308 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-506308 --alsologtostderr -v=5: (1.697867809s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-508122 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-508122
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-508122: (1.289236484s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.13s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-508122
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-508122: (7.131846753s)
--- PASS: TestMountStart/serial/RestartStopped (8.13s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-508122 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (102.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-727459 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-727459 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m41.978194004s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (102.53s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727459 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727459 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-727459 -- rollout status deployment/busybox: (2.981422914s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727459 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727459 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727459 -- exec busybox-7b57f96db7-b5zjq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727459 -- exec busybox-7b57f96db7-ndhwg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727459 -- exec busybox-7b57f96db7-b5zjq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727459 -- exec busybox-7b57f96db7-ndhwg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727459 -- exec busybox-7b57f96db7-b5zjq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727459 -- exec busybox-7b57f96db7-ndhwg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.71s)
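DeployApp2Nodes rolls out a two-replica busybox deployment across both nodes and then checks DNS from inside each pod, which is what the repeated nslookup execs above are doing. A sketch of the same loop via minikube's bundled kubectl, assuming the manifest path from this run:

    # deploy the test workload and wait for both replicas to become available
    minikube kubectl -p multinode-727459 -- apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    minikube kubectl -p multinode-727459 -- rollout status deployment/busybox
    # resolve an external name and the in-cluster service name from every pod
    for pod in $(minikube kubectl -p multinode-727459 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
      minikube kubectl -p multinode-727459 -- exec "$pod" -- nslookup kubernetes.io
      minikube kubectl -p multinode-727459 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done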

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727459 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727459 -- exec busybox-7b57f96db7-b5zjq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727459 -- exec busybox-7b57f96db7-b5zjq -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727459 -- exec busybox-7b57f96db7-ndhwg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E1123 08:35:56.067476 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-727459 -- exec busybox-7b57f96db7-ndhwg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                    
TestMultiNode/serial/AddNode (58.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-727459 -v=5 --alsologtostderr
E1123 08:36:53.709449 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-727459 -v=5 --alsologtostderr: (57.81678043s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.50s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-727459 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 cp testdata/cp-test.txt multinode-727459:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 cp multinode-727459:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2914708642/001/cp-test_multinode-727459.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 cp multinode-727459:/home/docker/cp-test.txt multinode-727459-m02:/home/docker/cp-test_multinode-727459_multinode-727459-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459-m02 "sudo cat /home/docker/cp-test_multinode-727459_multinode-727459-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 cp multinode-727459:/home/docker/cp-test.txt multinode-727459-m03:/home/docker/cp-test_multinode-727459_multinode-727459-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459-m03 "sudo cat /home/docker/cp-test_multinode-727459_multinode-727459-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 cp testdata/cp-test.txt multinode-727459-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 cp multinode-727459-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2914708642/001/cp-test_multinode-727459-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 cp multinode-727459-m02:/home/docker/cp-test.txt multinode-727459:/home/docker/cp-test_multinode-727459-m02_multinode-727459.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459 "sudo cat /home/docker/cp-test_multinode-727459-m02_multinode-727459.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 cp multinode-727459-m02:/home/docker/cp-test.txt multinode-727459-m03:/home/docker/cp-test_multinode-727459-m02_multinode-727459-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459-m03 "sudo cat /home/docker/cp-test_multinode-727459-m02_multinode-727459-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 cp testdata/cp-test.txt multinode-727459-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 cp multinode-727459-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2914708642/001/cp-test_multinode-727459-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 cp multinode-727459-m03:/home/docker/cp-test.txt multinode-727459:/home/docker/cp-test_multinode-727459-m03_multinode-727459.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459 "sudo cat /home/docker/cp-test_multinode-727459-m03_multinode-727459.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 cp multinode-727459-m03:/home/docker/cp-test.txt multinode-727459-m02:/home/docker/cp-test_multinode-727459-m03_multinode-727459-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459-m02 "sudo cat /home/docker/cp-test_multinode-727459-m03_multinode-727459-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.42s)
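A minimal sketch of the copy round trip exercised above, using the same profile and the file paths from the test:

    # host -> node, then read the file back over ssh
    out/minikube-linux-arm64 -p multinode-727459 cp testdata/cp-test.txt multinode-727459:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459 "sudo cat /home/docker/cp-test.txt"
    # node -> node, from the primary to the m02 worker
    out/minikube-linux-arm64 -p multinode-727459 cp multinode-727459:/home/docker/cp-test.txt multinode-727459-m02:/home/docker/cp-test_multinode-727459_multinode-727459-m02.txt
    out/minikube-linux-arm64 -p multinode-727459 ssh -n multinode-727459-m02 "sudo cat /home/docker/cp-test_multinode-727459_multinode-727459-m02.txt"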

                                                
                                    
TestMultiNode/serial/StopNode (2.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-727459 node stop m03: (1.303643898s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-727459 status: exit status 7 (521.013196ms)

                                                
                                                
-- stdout --
	multinode-727459
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-727459-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-727459-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-727459 status --alsologtostderr: exit status 7 (534.151071ms)

                                                
                                                
-- stdout --
	multinode-727459
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-727459-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-727459-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:37:07.927125 1148732 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:37:07.927319 1148732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:37:07.927330 1148732 out.go:374] Setting ErrFile to fd 2...
	I1123 08:37:07.927336 1148732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:37:07.927593 1148732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:37:07.927840 1148732 out.go:368] Setting JSON to false
	I1123 08:37:07.927875 1148732 mustload.go:66] Loading cluster: multinode-727459
	I1123 08:37:07.927989 1148732 notify.go:221] Checking for updates...
	I1123 08:37:07.928425 1148732 config.go:182] Loaded profile config "multinode-727459": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:37:07.928468 1148732 status.go:174] checking status of multinode-727459 ...
	I1123 08:37:07.929036 1148732 cli_runner.go:164] Run: docker container inspect multinode-727459 --format={{.State.Status}}
	I1123 08:37:07.949004 1148732 status.go:371] multinode-727459 host status = "Running" (err=<nil>)
	I1123 08:37:07.949030 1148732 host.go:66] Checking if "multinode-727459" exists ...
	I1123 08:37:07.949330 1148732 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-727459
	I1123 08:37:07.976337 1148732 host.go:66] Checking if "multinode-727459" exists ...
	I1123 08:37:07.976647 1148732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:37:07.976703 1148732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-727459
	I1123 08:37:07.995226 1148732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34362 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/multinode-727459/id_rsa Username:docker}
	I1123 08:37:08.100623 1148732 ssh_runner.go:195] Run: systemctl --version
	I1123 08:37:08.106858 1148732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:37:08.123993 1148732 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:37:08.178975 1148732 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 08:37:08.169877286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:37:08.179792 1148732 kubeconfig.go:125] found "multinode-727459" server: "https://192.168.67.2:8443"
	I1123 08:37:08.179832 1148732 api_server.go:166] Checking apiserver status ...
	I1123 08:37:08.179895 1148732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:37:08.191421 1148732 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup
	I1123 08:37:08.199588 1148732 api_server.go:182] apiserver freezer: "10:freezer:/docker/c7bc85a4fcb2d5270b227788d88a5ec353acca4360c8d18d98af83cf5f692476/crio/crio-5ae43e759c262d06b81b426309fdc2d59b6075314d4ab64ec44c4b64ed043c26"
	I1123 08:37:08.199682 1148732 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c7bc85a4fcb2d5270b227788d88a5ec353acca4360c8d18d98af83cf5f692476/crio/crio-5ae43e759c262d06b81b426309fdc2d59b6075314d4ab64ec44c4b64ed043c26/freezer.state
	I1123 08:37:08.206981 1148732 api_server.go:204] freezer state: "THAWED"
	I1123 08:37:08.207010 1148732 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1123 08:37:08.215083 1148732 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1123 08:37:08.215111 1148732 status.go:463] multinode-727459 apiserver status = Running (err=<nil>)
	I1123 08:37:08.215122 1148732 status.go:176] multinode-727459 status: &{Name:multinode-727459 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:37:08.215137 1148732 status.go:174] checking status of multinode-727459-m02 ...
	I1123 08:37:08.215467 1148732 cli_runner.go:164] Run: docker container inspect multinode-727459-m02 --format={{.State.Status}}
	I1123 08:37:08.231509 1148732 status.go:371] multinode-727459-m02 host status = "Running" (err=<nil>)
	I1123 08:37:08.231535 1148732 host.go:66] Checking if "multinode-727459-m02" exists ...
	I1123 08:37:08.231843 1148732 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-727459-m02
	I1123 08:37:08.247986 1148732 host.go:66] Checking if "multinode-727459-m02" exists ...
	I1123 08:37:08.248300 1148732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:37:08.248344 1148732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-727459-m02
	I1123 08:37:08.264654 1148732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34367 SSHKeyPath:/home/jenkins/minikube-integration/21966-1041293/.minikube/machines/multinode-727459-m02/id_rsa Username:docker}
	I1123 08:37:08.368197 1148732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:37:08.380951 1148732 status.go:176] multinode-727459-m02 status: &{Name:multinode-727459-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:37:08.380984 1148732 status.go:174] checking status of multinode-727459-m03 ...
	I1123 08:37:08.381310 1148732 cli_runner.go:164] Run: docker container inspect multinode-727459-m03 --format={{.State.Status}}
	I1123 08:37:08.398119 1148732 status.go:371] multinode-727459-m03 host status = "Stopped" (err=<nil>)
	I1123 08:37:08.398141 1148732 status.go:384] host is not running, skipping remaining checks
	I1123 08:37:08.398148 1148732 status.go:176] multinode-727459-m03 status: &{Name:multinode-727459-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)
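A short sketch of the same stop-and-inspect sequence; as the run shows, status exits non-zero (7) while any node is stopped, so the exit code is surfaced explicitly here (the trailing echo is only illustrative):

    out/minikube-linux-arm64 -p multinode-727459 node stop m03
    # status reports m03 as Stopped and exits with status 7
    out/minikube-linux-arm64 -p multinode-727459 status --alsologtostderr || echo "status exit code: $? (expected while m03 is stopped)"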

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-727459 node start m03 -v=5 --alsologtostderr: (7.121626817s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.89s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (80.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-727459
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-727459
E1123 08:37:19.131319 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-727459: (25.033191049s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-727459 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-727459 --wait=true -v=5 --alsologtostderr: (55.236528521s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-727459
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.39s)
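A sketch of the restart flow above, assuming the same profile; the point is that the node list is unchanged after a full stop and a --wait=true start:

    out/minikube-linux-arm64 node list -p multinode-727459
    out/minikube-linux-arm64 stop -p multinode-727459
    out/minikube-linux-arm64 start -p multinode-727459 --wait=true -v=5 --alsologtostderr
    # the same nodes should be listed again after the restart
    out/minikube-linux-arm64 node list -p multinode-727459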

                                                
                                    
TestMultiNode/serial/DeleteNode (5.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-727459 node delete m03: (4.950869436s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.72s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 stop
E1123 08:38:50.643352 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-727459 stop: (23.769993472s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-727459 status: exit status 7 (97.298925ms)

                                                
                                                
-- stdout --
	multinode-727459
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-727459-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-727459 status --alsologtostderr: exit status 7 (107.943182ms)

                                                
                                                
-- stdout --
	multinode-727459
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-727459-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:39:06.316537 1156545 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:39:06.316653 1156545 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:39:06.316663 1156545 out.go:374] Setting ErrFile to fd 2...
	I1123 08:39:06.316669 1156545 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:39:06.317036 1156545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:39:06.317251 1156545 out.go:368] Setting JSON to false
	I1123 08:39:06.317277 1156545 mustload.go:66] Loading cluster: multinode-727459
	I1123 08:39:06.318346 1156545 config.go:182] Loaded profile config "multinode-727459": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:39:06.318368 1156545 status.go:174] checking status of multinode-727459 ...
	I1123 08:39:06.318625 1156545 notify.go:221] Checking for updates...
	I1123 08:39:06.318932 1156545 cli_runner.go:164] Run: docker container inspect multinode-727459 --format={{.State.Status}}
	I1123 08:39:06.341400 1156545 status.go:371] multinode-727459 host status = "Stopped" (err=<nil>)
	I1123 08:39:06.341424 1156545 status.go:384] host is not running, skipping remaining checks
	I1123 08:39:06.341431 1156545 status.go:176] multinode-727459 status: &{Name:multinode-727459 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:39:06.341453 1156545 status.go:174] checking status of multinode-727459-m02 ...
	I1123 08:39:06.341768 1156545 cli_runner.go:164] Run: docker container inspect multinode-727459-m02 --format={{.State.Status}}
	I1123 08:39:06.377260 1156545 status.go:371] multinode-727459-m02 host status = "Stopped" (err=<nil>)
	I1123 08:39:06.377287 1156545 status.go:384] host is not running, skipping remaining checks
	I1123 08:39:06.377298 1156545 status.go:176] multinode-727459-m02 status: &{Name:multinode-727459-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.98s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-727459 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-727459 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (48.749439388s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-727459 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.44s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-727459
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-727459-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-727459-m02 --driver=docker  --container-runtime=crio: exit status 14 (95.174913ms)

                                                
                                                
-- stdout --
	* [multinode-727459-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-727459-m02' is duplicated with machine name 'multinode-727459-m02' in profile 'multinode-727459'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-727459-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-727459-m03 --driver=docker  --container-runtime=crio: (33.507748851s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-727459
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-727459: exit status 80 (345.881933ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-727459 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-727459-m03 already exists in multinode-727459-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-727459-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-727459-m03: (2.130180446s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.13s)
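A sketch of the naming rules this test validates: a new profile may not reuse a machine name that already belongs to an existing multi-node profile, and node add refuses a node whose name is already taken by a standalone profile:

    # exits 14 (MK_USAGE): multinode-727459-m02 is already a machine name inside profile multinode-727459
    out/minikube-linux-arm64 start -p multinode-727459-m02 --driver=docker --container-runtime=crio
    # a non-conflicting profile name starts normally
    out/minikube-linux-arm64 start -p multinode-727459-m03 --driver=docker --container-runtime=crio
    # exits 80 (GUEST_NODE_ADD): the next node name, m03, now collides with that profile
    out/minikube-linux-arm64 node add -p multinode-727459
    out/minikube-linux-arm64 delete -p multinode-727459-m03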

                                                
                                    
TestPreload (126.86s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-162256 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1123 08:40:56.067354 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-162256 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m2.426747005s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-162256 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-162256 image pull gcr.io/k8s-minikube/busybox: (2.106591358s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-162256
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-162256: (5.979045796s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-162256 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-162256 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (53.628357842s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-162256 image list
helpers_test.go:175: Cleaning up "test-preload-162256" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-162256
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-162256: (2.479735123s)
--- PASS: TestPreload (126.86s)
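The preload scenario reduces to: start without preloaded images, pull an extra image, stop, restart, and confirm the image is still present; a sketch with the flags used in the run above:

    out/minikube-linux-arm64 start -p test-preload-162256 --memory=3072 --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
    out/minikube-linux-arm64 -p test-preload-162256 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-arm64 stop -p test-preload-162256
    out/minikube-linux-arm64 start -p test-preload-162256 --memory=3072 --wait=true --driver=docker --container-runtime=crio
    # busybox should still appear here after the restart
    out/minikube-linux-arm64 -p test-preload-162256 image list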

                                                
                                    
TestScheduledStopUnix (110.69s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-079631 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-079631 --memory=3072 --driver=docker  --container-runtime=crio: (33.784300711s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-079631 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 08:43:16.949168 1170518 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:43:16.949362 1170518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:43:16.949389 1170518 out.go:374] Setting ErrFile to fd 2...
	I1123 08:43:16.949408 1170518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:43:16.949715 1170518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:43:16.950005 1170518 out.go:368] Setting JSON to false
	I1123 08:43:16.950183 1170518 mustload.go:66] Loading cluster: scheduled-stop-079631
	I1123 08:43:16.950603 1170518 config.go:182] Loaded profile config "scheduled-stop-079631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:43:16.950728 1170518 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/scheduled-stop-079631/config.json ...
	I1123 08:43:16.950967 1170518 mustload.go:66] Loading cluster: scheduled-stop-079631
	I1123 08:43:16.951173 1170518 config.go:182] Loaded profile config "scheduled-stop-079631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-079631 -n scheduled-stop-079631
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-079631 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 08:43:17.394793 1170606 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:43:17.394908 1170606 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:43:17.394919 1170606 out.go:374] Setting ErrFile to fd 2...
	I1123 08:43:17.394923 1170606 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:43:17.395300 1170606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:43:17.395587 1170606 out.go:368] Setting JSON to false
	I1123 08:43:17.395856 1170606 daemonize_unix.go:73] killing process 1170536 as it is an old scheduled stop
	I1123 08:43:17.395937 1170606 mustload.go:66] Loading cluster: scheduled-stop-079631
	I1123 08:43:17.400489 1170606 config.go:182] Loaded profile config "scheduled-stop-079631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:43:17.400626 1170606 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/scheduled-stop-079631/config.json ...
	I1123 08:43:17.400896 1170606 mustload.go:66] Loading cluster: scheduled-stop-079631
	I1123 08:43:17.401030 1170606 config.go:182] Loaded profile config "scheduled-stop-079631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1123 08:43:17.406590 1043159 retry.go:31] will retry after 144.273µs: open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/scheduled-stop-079631/pid: no such file or directory
I1123 08:43:17.407739 1043159 retry.go:31] will retry after 187.014µs: open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/scheduled-stop-079631/pid: no such file or directory
I1123 08:43:17.408821 1043159 retry.go:31] will retry after 167.339µs: open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/scheduled-stop-079631/pid: no such file or directory
I1123 08:43:17.410167 1043159 retry.go:31] will retry after 227.827µs: open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/scheduled-stop-079631/pid: no such file or directory
I1123 08:43:17.412215 1043159 retry.go:31] will retry after 732.61µs: open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/scheduled-stop-079631/pid: no such file or directory
I1123 08:43:17.413351 1043159 retry.go:31] will retry after 550.794µs: open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/scheduled-stop-079631/pid: no such file or directory
I1123 08:43:17.414989 1043159 retry.go:31] will retry after 1.077614ms: open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/scheduled-stop-079631/pid: no such file or directory
I1123 08:43:17.417191 1043159 retry.go:31] will retry after 1.097396ms: open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/scheduled-stop-079631/pid: no such file or directory
I1123 08:43:17.419385 1043159 retry.go:31] will retry after 2.976377ms: open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/scheduled-stop-079631/pid: no such file or directory
I1123 08:43:17.423269 1043159 retry.go:31] will retry after 3.684306ms: open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/scheduled-stop-079631/pid: no such file or directory
I1123 08:43:17.427458 1043159 retry.go:31] will retry after 7.456981ms: open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/scheduled-stop-079631/pid: no such file or directory
I1123 08:43:17.435719 1043159 retry.go:31] will retry after 5.538522ms: open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/scheduled-stop-079631/pid: no such file or directory
I1123 08:43:17.442323 1043159 retry.go:31] will retry after 14.377714ms: open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/scheduled-stop-079631/pid: no such file or directory
I1123 08:43:17.457551 1043159 retry.go:31] will retry after 28.188973ms: open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/scheduled-stop-079631/pid: no such file or directory
I1123 08:43:17.486759 1043159 retry.go:31] will retry after 18.329935ms: open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/scheduled-stop-079631/pid: no such file or directory
I1123 08:43:17.505896 1043159 retry.go:31] will retry after 55.941342ms: open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/scheduled-stop-079631/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-079631 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-079631 -n scheduled-stop-079631
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-079631
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-079631 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 08:43:43.365158 1170971 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:43:43.365290 1170971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:43:43.365301 1170971 out.go:374] Setting ErrFile to fd 2...
	I1123 08:43:43.365306 1170971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:43:43.365606 1170971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:43:43.365865 1170971 out.go:368] Setting JSON to false
	I1123 08:43:43.365959 1170971 mustload.go:66] Loading cluster: scheduled-stop-079631
	I1123 08:43:43.366314 1170971 config.go:182] Loaded profile config "scheduled-stop-079631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:43:43.366390 1170971 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/scheduled-stop-079631/config.json ...
	I1123 08:43:43.366564 1170971 mustload.go:66] Loading cluster: scheduled-stop-079631
	I1123 08:43:43.366667 1170971 config.go:182] Loaded profile config "scheduled-stop-079631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
E1123 08:43:50.643152 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-079631
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-079631: exit status 7 (78.903457ms)

                                                
                                                
-- stdout --
	scheduled-stop-079631
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-079631 -n scheduled-stop-079631
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-079631 -n scheduled-stop-079631: exit status 7 (66.990887ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-079631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-079631
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-079631: (5.265663814s)
--- PASS: TestScheduledStopUnix (110.69s)
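The scheduled-stop commands above work as-is against an existing profile; a sketch (the sleep is my addition, only there to let the 15s schedule fire before checking status):

    # schedule a stop, then cancel it before it fires
    out/minikube-linux-arm64 stop -p scheduled-stop-079631 --schedule 5m -v=5 --alsologtostderr
    out/minikube-linux-arm64 stop -p scheduled-stop-079631 --cancel-scheduled
    # schedule a short stop and let it run; status then reports Stopped and exits 7
    out/minikube-linux-arm64 stop -p scheduled-stop-079631 --schedule 15s -v=5 --alsologtostderr
    sleep 20
    out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-079631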

                                                
                                    
TestInsufficientStorage (13.35s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-437098 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-437098 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.716615891s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b3c3578d-2b2b-4071-bb01-52bfd4614626","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-437098] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"41c67639-7ef0-4c0b-a04f-ac8e04cddd60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21966"}}
	{"specversion":"1.0","id":"6e86ffa8-675b-422a-84bf-75625b5d110e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b438e4d7-1364-403a-9099-ff82353cf719","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig"}}
	{"specversion":"1.0","id":"93a1391b-0220-41d6-9bcf-88b7739bfdbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube"}}
	{"specversion":"1.0","id":"9cad728d-9eac-4319-b810-73ea79ce0aa5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d73ee7c5-44c6-4e93-95fc-0e58636635db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"46698235-c82c-4381-90c8-9ed052d4e032","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"455c086a-135d-429c-a7a9-37909ea99cd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"26ae6972-db26-401b-9be1-a1b7c3fa51a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"03fd524c-aaff-4c29-b23a-1109aad4d9b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"1b66394c-e549-4837-99e3-0ac33f67ea1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-437098\" primary control-plane node in \"insufficient-storage-437098\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d4cbd5c-7977-4273-a7c4-b173a890f624","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f404eff-bd04-4a76-81fd-397fc7818722","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"162cfa14-a300-4672-a25b-31bc8df180bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-437098 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-437098 --output=json --layout=cluster: exit status 7 (302.975351ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-437098","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-437098","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 08:44:44.791041 1172702 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-437098" does not appear in /home/jenkins/minikube-integration/21966-1041293/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-437098 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-437098 --output=json --layout=cluster: exit status 7 (332.682102ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-437098","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-437098","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 08:44:45.118599 1172769 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-437098" does not appear in /home/jenkins/minikube-integration/21966-1041293/kubeconfig
	E1123 08:44:45.131053 1172769 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/insufficient-storage-437098/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-437098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-437098
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-437098: (1.997333239s)
--- PASS: TestInsufficientStorage (13.35s)
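A sketch of how the storage check is triggered here: the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE values shown in the JSON output appear to be test-only overrides (treating them as environment variables below is an assumption); with them in place, start fails with exit 26 and a RSRC_DOCKER_STORAGE error event, and status --layout=cluster reports code 507:

    export MINIKUBE_TEST_STORAGE_CAPACITY=100
    export MINIKUBE_TEST_AVAILABLE_STORAGE=19
    # exits 26 and emits an io.k8s.sigs.minikube.error event (RSRC_DOCKER_STORAGE)
    out/minikube-linux-arm64 start -p insufficient-storage-437098 --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio
    # cluster and node report StatusName "InsufficientStorage" (code 507)
    out/minikube-linux-arm64 status -p insufficient-storage-437098 --output=json --layout=cluster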

                                                
                                    
TestRunningBinaryUpgrade (53.41s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.833395822 start -p running-upgrade-462653 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.833395822 start -p running-upgrade-462653 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.803363107s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-462653 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-462653 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.707461454s)
helpers_test.go:175: Cleaning up "running-upgrade-462653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-462653
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-462653: (2.107707476s)
--- PASS: TestRunningBinaryUpgrade (53.41s)
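A sketch of the in-place binary upgrade: create the cluster with an older release (here the temporary v1.32.0 binary used by this run), then point the current binary at the still-running profile:

    /tmp/minikube-v1.32.0.833395822 start -p running-upgrade-462653 --memory=3072 --vm-driver=docker --container-runtime=crio
    # the newer binary restarts and updates the same running profile
    out/minikube-linux-arm64 start -p running-upgrade-462653 --memory=3072 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 delete -p running-upgrade-462653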

                                                
                                    
TestKubernetesUpgrade (364.2s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-354226 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-354226 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.815007491s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-354226
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-354226: (1.498704631s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-354226 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-354226 status --format={{.Host}}: exit status 7 (70.943622ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-354226 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-354226 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m40.631004836s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-354226 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-354226 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-354226 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (118.105449ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-354226] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-354226
	    minikube start -p kubernetes-upgrade-354226 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3542262 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-354226 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-354226 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-354226 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.722639216s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-354226" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-354226
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-354226: (2.206706593s)
--- PASS: TestKubernetesUpgrade (364.20s)
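
Note: the sequence exercised above can be reproduced by hand with a stock minikube binary; the profile name below is illustrative, the flags and version pair are the ones used in this run:

	minikube start -p upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	minikube stop -p upgrade-demo
	minikube start -p upgrade-demo --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio   # upgrade in place
	minikube start -p upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio   # rejected with K8S_DOWNGRADE_UNSUPPORTED
	minikube delete -p upgrade-demo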

                                                
                                    
x
+
TestMissingContainerUpgrade (119.57s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.535425619 start -p missing-upgrade-232904 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.535425619 start -p missing-upgrade-232904 --memory=3072 --driver=docker  --container-runtime=crio: (1m3.767129859s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-232904
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-232904
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-232904 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-232904 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.578655515s)
helpers_test.go:175: Cleaning up "missing-upgrade-232904" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-232904
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-232904: (2.126306188s)
--- PASS: TestMissingContainerUpgrade (119.57s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-293465 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-293465 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (99.103951ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-293465] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
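
Note: as the stderr above indicates, --no-kubernetes and --kubernetes-version are mutually exclusive; if a version has been pinned in the global config, the suggested fix is to clear it before starting a no-Kubernetes profile (profile name illustrative):

	minikube config unset kubernetes-version
	minikube start -p no-k8s-demo --no-kubernetes --driver=docker --container-runtime=crio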

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (45.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-293465 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-293465 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (45.383139371s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-293465 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.83s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (7.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-293465 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-293465 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.872199935s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-293465 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-293465 status -o json: exit status 2 (340.949151ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-293465","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-293465
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-293465: (2.405599709s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.62s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (9.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-293465 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-293465 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.252186089s)
--- PASS: TestNoKubernetes/serial/Start (9.25s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21966-1041293/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-293465 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-293465 "sudo systemctl is-active --quiet service kubelet": exit status 1 (375.155652ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)
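
Note: the check above relies on systemctl's exit code rather than its output; the remote "Process exited with status 3" means the kubelet unit is not active, which is exactly what the test expects. A hand-run equivalent (profile name illustrative):

	minikube ssh -p no-k8s-demo "sudo systemctl is-active --quiet service kubelet" || echo "kubelet not active"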

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.64s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-293465
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-293465: (1.390303267s)
--- PASS: TestNoKubernetes/serial/Stop (1.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-293465 --driver=docker  --container-runtime=crio
E1123 08:45:56.067548 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-293465 --driver=docker  --container-runtime=crio: (7.162033499s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.16s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-293465 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-293465 "sudo systemctl is-active --quiet service kubelet": exit status 1 (275.940378ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.19s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.19s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (57.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3885614531 start -p stopped-upgrade-885580 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3885614531 start -p stopped-upgrade-885580 --memory=3072 --vm-driver=docker  --container-runtime=crio: (38.472426919s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3885614531 -p stopped-upgrade-885580 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3885614531 -p stopped-upgrade-885580 stop: (1.217543641s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-885580 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-885580 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.103210615s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (57.79s)
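
Note: this scenario starts a cluster with an older release binary (v1.32.0 here), stops it, and then lets the current binary take over the stopped profile; with both binaries on hand the same flow would look roughly like this (paths and profile name illustrative, flags as used above):

	/path/to/minikube-v1.32.0 start -p stopped-upgrade-demo --memory=3072 --vm-driver=docker --container-runtime=crio
	/path/to/minikube-v1.32.0 -p stopped-upgrade-demo stop
	minikube start -p stopped-upgrade-demo --memory=3072 --driver=docker --container-runtime=crio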

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-885580
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-885580: (1.139039517s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                    
x
+
TestPause/serial/Start (80.36s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-041000 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1123 08:48:50.643134 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-041000 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m20.358300816s)
--- PASS: TestPause/serial/Start (80.36s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (26.83s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-041000 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-041000 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.802591541s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (26.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-082524 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-082524 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (254.734928ms)

                                                
                                                
-- stdout --
	* [false-082524] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:51:23.170190 1209826 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:51:23.170723 1209826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:51:23.170771 1209826 out.go:374] Setting ErrFile to fd 2...
	I1123 08:51:23.170791 1209826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:51:23.171075 1209826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-1041293/.minikube/bin
	I1123 08:51:23.171555 1209826 out.go:368] Setting JSON to false
	I1123 08:51:23.172474 1209826 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34429,"bootTime":1763853455,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 08:51:23.172574 1209826 start.go:143] virtualization:  
	I1123 08:51:23.176405 1209826 out.go:179] * [false-082524] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:51:23.179473 1209826 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:51:23.179605 1209826 notify.go:221] Checking for updates...
	I1123 08:51:23.185055 1209826 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:51:23.188028 1209826 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-1041293/kubeconfig
	I1123 08:51:23.190957 1209826 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-1041293/.minikube
	I1123 08:51:23.193957 1209826 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:51:23.197215 1209826 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:51:23.200655 1209826 config.go:182] Loaded profile config "kubernetes-upgrade-354226": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:51:23.200811 1209826 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:51:23.244713 1209826 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:51:23.244822 1209826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:51:23.338438 1209826 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:51:23.325694041 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:51:23.338536 1209826 docker.go:319] overlay module found
	I1123 08:51:23.341642 1209826 out.go:179] * Using the docker driver based on user configuration
	I1123 08:51:23.344817 1209826 start.go:309] selected driver: docker
	I1123 08:51:23.344838 1209826 start.go:927] validating driver "docker" against <nil>
	I1123 08:51:23.344852 1209826 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:51:23.348354 1209826 out.go:203] 
	W1123 08:51:23.351158 1209826 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1123 08:51:23.353946 1209826 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-082524 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-082524

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-082524

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-082524

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-082524

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-082524

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-082524

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-082524

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-082524

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-082524

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-082524

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-082524

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-082524" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-082524" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:47:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-354226
contexts:
- context:
    cluster: kubernetes-upgrade-354226
    user: kubernetes-upgrade-354226
  name: kubernetes-upgrade-354226
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-354226
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/kubernetes-upgrade-354226/client.crt
    client-key: /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/kubernetes-upgrade-354226/client.key
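
Note: the kubeconfig above only contains the kubernetes-upgrade-354226 entries and an empty current-context, which is why every kubectl probe against the false-082524 context in this debug dump fails; the contexts actually present in the file can be listed with:

	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21966-1041293/kubeconfig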

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-082524

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-082524"

                                                
                                                
----------------------- debugLogs end: false-082524 [took: 4.527635978s] --------------------------------
helpers_test.go:175: Cleaning up "false-082524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-082524
--- PASS: TestNetworkPlugins/group/false (5.00s)
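
Note: this group is expected to bail out immediately because the crio runtime requires a CNI, so --cni=false is rejected with MK_USAGE before any node is created; dropping the flag (or choosing a concrete CNI) yields a start command the same configuration would accept (profile name illustrative):

	minikube start -p cni-demo --memory=3072 --driver=docker --container-runtime=crio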

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (63.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-283312 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1123 08:53:33.711237 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:53:50.643096 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:53:59.132786 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-283312 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m3.916119549s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (63.92s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-283312 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a288cc83-ae5e-414e-b584-9cd4bebbd5e8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a288cc83-ae5e-414e-b584-9cd4bebbd5e8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.007720398s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-283312 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.46s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-283312 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-283312 --alsologtostderr -v=3: (12.001103544s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-283312 -n old-k8s-version-283312
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-283312 -n old-k8s-version-283312: exit status 7 (75.043007ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-283312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
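
Note: throughout this report, exit status 7 from "minikube status" accompanies a Stopped host rather than a command failure, which is why the helper logs it as "may be ok" and the test goes on to enable the addon on the stopped profile:

	minikube status --format={{.Host}} -p old-k8s-version-283312
	minikube addons enable dashboard -p old-k8s-version-283312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4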

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (46.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-283312 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-283312 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (46.535854221s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-283312 -n old-k8s-version-283312
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.92s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-6t89s" [0813d124-6f61-456a-9a7d-79a6b4d2e1a3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003219437s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-6t89s" [0813d124-6f61-456a-9a7d-79a6b4d2e1a3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004014895s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-283312 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-283312 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-262764 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-262764 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m24.792101016s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.79s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (83.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m23.282065294s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-262764 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5e87a35a-9a78-4158-8a26-e6618c72aa86] Pending
helpers_test.go:352: "busybox" [5e87a35a-9a78-4158-8a26-e6618c72aa86] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5e87a35a-9a78-4158-8a26-e6618c72aa86] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005044266s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-262764 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-262764 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-262764 --alsologtostderr -v=3: (12.014735993s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-262764 -n default-k8s-diff-port-262764
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-262764 -n default-k8s-diff-port-262764: exit status 7 (73.904163ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-262764 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
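
EnableAddonAfterStop checks the profile really is down (exit status 7 here corresponds to the Stopped host state shown in stdout, which start_stop_delete_test.go:237 logs as "may be ok") and then enables the dashboard addon while the cluster is stopped. A hedged shell equivalent of the two steps, run from the repository root as in the log:

    out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-262764 -n default-k8s-diff-port-262764 \
      || echo "status exited $? (expected non-zero while the profile is down)"
    out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-262764 --images=MetricsScraper=registry.k8s.io/echoserver:1.4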

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-262764 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-262764 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.738851602s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-262764 -n default-k8s-diff-port-262764
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-879861 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [58c79ac6-29f0-45fb-951d-e92b37939a41] Pending
helpers_test.go:352: "busybox" [58c79ac6-29f0-45fb-951d-e92b37939a41] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [58c79ac6-29f0-45fb-951d-e92b37939a41] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003459246s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-879861 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-879861 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-879861 --alsologtostderr -v=3: (12.069591104s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-879861 -n embed-certs-879861
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-879861 -n embed-certs-879861: exit status 7 (81.058886ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-879861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (53.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-879861 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.251932155s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-879861 -n embed-certs-879861
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.71s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pcsrh" [30d0a90d-21de-40ab-802a-ef4067be718b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002878307s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pcsrh" [30d0a90d-21de-40ab-802a-ef4067be718b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003974243s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-262764 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)
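
UserAppExistsAfterStop and AddonExistsAfterStop together confirm that the dashboard addon survived the stop/start cycle: the first waits for the k8s-app=kubernetes-dashboard pod, the second also describes the metrics-scraper deployment. A hand-run equivalent, with kubectl wait standing in for the test's polling (a sketch, not the test's own code):

    kubectl --context default-k8s-diff-port-262764 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m
    kubectl --context default-k8s-diff-port-262764 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper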

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-262764 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)
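
VerifyKubernetesImages lists the node's images as JSON and reports anything outside the expected minikube/Kubernetes set; here that is the kindnetd CNI image and the busybox image deployed by the earlier DeployApp step, both of which are allowed. To eyeball the same output by hand (the jq filter and the repoTags field name are assumptions about the JSON shape, since the report does not include the raw listing; the test itself parses this JSON in Go):

    out/minikube-linux-arm64 -p default-k8s-diff-port-262764 image list --format=json | jq -r '.[].repoTags[]?'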

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (68.68s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-591175 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 08:58:50.642629 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/addons-782760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-591175 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m8.679934981s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.68s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ld9hg" [d3f15842-9da9-4d8d-ae2b-dadc7e55e00a] Running
E1123 08:58:59.911971 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:58:59.918564 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:58:59.931940 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:58:59.953382 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:58:59.995125 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:59:00.076959 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:59:00.238938 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:59:00.560720 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.014261603s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.02s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ld9hg" [d3f15842-9da9-4d8d-ae2b-dadc7e55e00a] Running
E1123 08:59:01.201987 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:59:02.483742 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:59:05.045379 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003816475s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-879861 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-879861 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (37.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-261704 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 08:59:20.411871 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:59:40.893193 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-261704 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (37.893520721s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.89s)
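
The newest-cni profile starts with --network-plugin=cni and overrides kubeadm's pod network CIDR to 10.42.0.0/16 via --extra-config, while waiting only for the apiserver, system pods and default service account (no CNI is installed yet, hence the later "cni mode requires additional setup" warnings). One optional hand check, not part of the test, to confirm the override reached the node object (its podCIDR should fall inside 10.42.0.0/16):

    kubectl --context newest-cni-261704 get nodes -o jsonpath='{.items[0].spec.podCIDR}'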

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-591175 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [955d780f-d21f-4c17-a520-a1df10d9609a] Pending
helpers_test.go:352: "busybox" [955d780f-d21f-4c17-a520-a1df10d9609a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [955d780f-d21f-4c17-a520-a1df10d9609a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003523313s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-591175 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.45s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-261704 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-261704 --alsologtostderr -v=3: (1.485899385s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.49s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-591175 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-591175 --alsologtostderr -v=3: (12.664230595s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.66s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-261704 -n newest-cni-261704
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-261704 -n newest-cni-261704: exit status 7 (110.857568ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-261704 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (15.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-261704 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-261704 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (15.376806453s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-261704 -n newest-cni-261704
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.81s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-591175 -n no-preload-591175
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-591175 -n no-preload-591175: exit status 7 (103.862628ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-591175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (57.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-591175 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-591175 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (56.847233939s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-591175 -n no-preload-591175
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (57.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-261704 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (81.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-082524 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1123 09:00:56.067726 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/functional-333688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-082524 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m21.966793967s)
--- PASS: TestNetworkPlugins/group/auto/Start (81.97s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pjsjj" [362aa06f-c276-4d53-b60f-02c2feed6668] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003760312s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pjsjj" [362aa06f-c276-4d53-b60f-02c2feed6668] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003052075s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-591175 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-591175 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (80.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-082524 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1123 09:01:43.776272 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-082524 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m20.889148086s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-082524 "pgrep -a kubelet"
I1123 09:01:49.465565 1043159 config.go:182] Loaded profile config "auto-082524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-082524 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hwj6w" [01d546b0-19da-4ed3-a130-3fd9bb2aea8f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hwj6w" [01d546b0-19da-4ed3-a130-3fd9bb2aea8f] Running
E1123 09:01:56.995743 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:01:57.002093 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:01:57.013438 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:01:57.034865 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:01:57.076185 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:01:57.157518 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:01:57.318970 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:01:57.640439 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:01:58.282628 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:01:59.564570 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003395573s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-082524 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-082524 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-082524 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
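
DNS, Localhost and HairPin all exec into the netcat deployment created by NetCatPod: the first resolves kubernetes.default through cluster DNS, the second opens a loopback connection on port 8080, and the third connects back to the host name "netcat" (presumably the service fronting the same deployment), exercising hairpin traffic. The same three probes, grouped into one snippet for this profile:

    kubectl --context auto-082524 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-082524 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-082524 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"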

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (59.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-082524 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1123 09:02:37.977376 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-082524 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (59.261356561s)
--- PASS: TestNetworkPlugins/group/calico/Start (59.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-57czq" [c4bea4d7-7fed-4e5d-ba7f-0ef19dfe021b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003695429s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)
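
ControllerPod verifies that the CNI's own pod (here the kindnet pod in kube-system) is healthy before any connectivity tests run. A hand-run equivalent of the wait, as a sketch with kubectl wait in place of the test's polling:

    kubectl --context kindnet-082524 -n kube-system wait pod -l app=kindnet --for=condition=Ready --timeout=10m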

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-082524 "pgrep -a kubelet"
I1123 09:02:57.422970 1043159 config.go:182] Loaded profile config "kindnet-082524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-082524 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bqbfc" [55656e7d-6a3d-4433-ad0f-b4be70ca731e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bqbfc" [55656e7d-6a3d-4433-ad0f-b4be70ca731e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003976752s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-082524 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-082524 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-082524 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-cx989" [1feca282-82e5-475e-8740-d3741f31c08c] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004831869s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-082524 "pgrep -a kubelet"
I1123 09:03:28.481606 1043159 config.go:182] Loaded profile config "calico-082524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-082524 replace --force -f testdata/netcat-deployment.yaml
I1123 09:03:28.823429 1043159 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pxcxv" [1a960315-3a64-4b7e-8364-25d0064a209b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pxcxv" [1a960315-3a64-4b7e-8364-25d0064a209b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004047471s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (63.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-082524 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-082524 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m3.595585606s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-082524 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-082524 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-082524 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (83.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-082524 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1123 09:04:27.618224 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/old-k8s-version-283312/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-082524 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m23.8311944s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-082524 "pgrep -a kubelet"
I1123 09:04:37.777347 1043159 config.go:182] Loaded profile config "custom-flannel-082524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-082524 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rs6cv" [1c261a73-9848-4851-a1a6-7d58e71e7a6d] Pending
E1123 09:04:40.860772 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/default-k8s-diff-port-262764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-rs6cv" [1c261a73-9848-4851-a1a6-7d58e71e7a6d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rs6cv" [1c261a73-9848-4851-a1a6-7d58e71e7a6d] Running
E1123 09:04:45.823102 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:04:45.829569 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:04:45.840930 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:04:45.862400 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:04:45.903877 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:04:45.985343 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:04:46.147058 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:04:46.468296 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:04:47.110236 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:04:48.392295 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003491195s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-082524 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-082524 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-082524 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-082524 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1123 09:05:26.800337 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-082524 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (56.004714095s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-082524 "pgrep -a kubelet"
I1123 09:05:30.516941 1043159 config.go:182] Loaded profile config "enable-default-cni-082524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-082524 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4c6kr" [c7629f24-06ed-47a5-9765-e33893e8eafa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4c6kr" [c7629f24-06ed-47a5-9765-e33893e8eafa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003929991s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-082524 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-082524 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-082524 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)
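Note: the three probes above (DNS, Localhost, HairPin) follow the same pattern: run a short command inside the netcat deployment with kubectl exec and only inspect the exit status. The Go sketch below is an illustration of that pattern, not the actual net_test.go code; the context name and the netcat deployment from testdata/netcat-deployment.yaml are taken from the log above.

	// Minimal sketch (not the net_test.go implementation) of the DNS,
	// Localhost and HairPin probes: each one execs a command inside the
	// netcat deployment and only checks the exit status.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func execInNetcat(context string, args ...string) error {
		base := []string{"--context", context, "exec", "deployment/netcat", "--"}
		cmd := exec.Command("kubectl", append(base, args...)...)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		return nil
	}

	func main() {
		ctx := "enable-default-cni-082524"
		checks := map[string][]string{
			"dns":       {"nslookup", "kubernetes.default"},
			"localhost": {"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
			"hairpin":   {"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
		}
		for name, args := range checks {
			if err := execInNetcat(ctx, args...); err != nil {
				fmt.Printf("%s check failed: %v\n", name, err)
				continue
			}
			fmt.Printf("%s check passed\n", name)
		}
	}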

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (81.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-082524 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1123 09:06:07.762346 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-082524 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m21.412401112s)
--- PASS: TestNetworkPlugins/group/bridge/Start (81.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-bvn79" [5ab66a26-38a0-4b3c-8d1d-45833bf178b2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003059026s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)
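Note: the ControllerPod check waits for a pod matching the app=flannel label in the kube-flannel namespace to become Ready. The Go sketch below shows an equivalent manual wait; it is an illustration rather than the helpers_test.go implementation, and it assumes kubectl is on PATH and the flannel-082524 context exists.

	// Minimal sketch: block until a pod labelled app=flannel in the
	// kube-flannel namespace reports Ready, or the timeout expires.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl",
			"--context", "flannel-082524",
			"-n", "kube-flannel",
			"wait", "--for=condition=Ready",
			"pod", "-l", "app=flannel",
			"--timeout=10m")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("flannel controller pod not ready: %v\n%s", err, out)
			return
		}
		fmt.Println("flannel controller pod is ready")
	}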

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-082524 "pgrep -a kubelet"
I1123 09:06:14.757241 1043159 config.go:182] Loaded profile config "flannel-082524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-082524 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gmlbd" [42df08f8-4eb9-4410-b8e5-201bf660aa23] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gmlbd" [42df08f8-4eb9-4410-b8e5-201bf660aa23] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.006038708s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-082524 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-082524 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-082524 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-082524 "pgrep -a kubelet"
I1123 09:07:25.602626 1043159 config.go:182] Loaded profile config "bridge-082524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-082524 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4ffd5" [05d26b90-1ca6-4635-bd78-d1ee30b29bfa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1123 09:07:29.684429 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/no-preload-591175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-4ffd5" [05d26b90-1ca6-4635-bd78-d1ee30b29bfa] Running
E1123 09:07:30.762695 1043159 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/auto-082524/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003863427s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-082524 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-082524 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-082524 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (31/328)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-178439 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-178439" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-178439
--- SKIP: TestDownloadOnlyKic (0.44s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-880590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-880590
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-082524 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-082524

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-082524

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-082524

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-082524

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-082524

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-082524

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-082524

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-082524

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-082524

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-082524

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-082524

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-082524" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-082524" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:47:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-354226
contexts:
- context:
    cluster: kubernetes-upgrade-354226
    user: kubernetes-upgrade-354226
  name: kubernetes-upgrade-354226
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-354226
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/kubernetes-upgrade-354226/client.crt
    client-key: /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/kubernetes-upgrade-354226/client.key
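
Note: the kubectl config dump above explains the repeated "context was not found" / "does not exist" messages in this debug log: the kubeconfig only defines kubernetes-upgrade-354226 and current-context is empty, so no kubenet-082524 context exists. The Go sketch below (not part of the minikube test suite) shows how such a missing context can be detected with client-go's clientcmd package; the kubeconfig path is a hypothetical placeholder.

	// Minimal sketch: load a kubeconfig with client-go and report whether
	// a named context such as "kubenet-082524" is defined in it.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; substitute the file under test.
		cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
		if err != nil {
			fmt.Println("cannot load kubeconfig:", err)
			return
		}
		name := "kubenet-082524"
		if _, ok := cfg.Contexts[name]; !ok {
			fmt.Printf("context %q was not found (current-context is %q)\n",
				name, cfg.CurrentContext)
			return
		}
		fmt.Printf("context %q exists\n", name)
	}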

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-082524

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-082524"

                                                
                                                
----------------------- debugLogs end: kubenet-082524 [took: 5.5209211s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-082524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-082524
--- SKIP: TestNetworkPlugins/group/kubenet (5.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-082524 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-082524

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-082524

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-082524

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-082524

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-082524

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-082524

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-082524

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-082524

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-082524

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-082524

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-082524

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-082524" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-082524" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: iptables-save:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: iptables table nat:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-082524

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-082524

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-082524" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-082524" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-082524

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-082524

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-082524" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-082524" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-082524" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-082524" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-082524" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: kubelet daemon config:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> k8s: kubelet logs:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-1041293/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:51:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-354226
contexts:
- context:
    cluster: kubernetes-upgrade-354226
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:51:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-354226
  name: kubernetes-upgrade-354226
current-context: kubernetes-upgrade-354226
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-354226
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/kubernetes-upgrade-354226/client.crt
    client-key: /home/jenkins/minikube-integration/21966-1041293/.minikube/profiles/kubernetes-upgrade-354226/client.key
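Note: the kubeconfig above only defines the kubernetes-upgrade-354226 cluster, context, and user; there is no cilium-082524 context, which is why every kubectl probe in this dump fails with "context was not found" or "does not exist". A quick way to confirm which contexts the collector would see (a sketch assuming the same kubeconfig shown above; not part of the original log):
  kubectl config get-contexts -o name
  kubectl config current-context
  kubectl --context cilium-082524 get pods -n kube-system   # fails with: error: context "cilium-082524" does not exist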

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-082524

>>> host: docker daemon status:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: docker daemon config:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: docker system info:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: cri-docker daemon status:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: cri-docker daemon config:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: cri-dockerd version:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: containerd daemon status:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: containerd daemon config:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: containerd config dump:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: crio daemon status:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: crio daemon config:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: /etc/crio:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

>>> host: crio config:
* Profile "cilium-082524" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-082524"

----------------------- debugLogs end: cilium-082524 [took: 5.523570239s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-082524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-082524
--- SKIP: TestNetworkPlugins/group/cilium (5.73s)
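For reference, the skipped scenario would normally exercise the Cilium CNI on this profile; the probes above fail only because the profile was never created before the collector ran. A minimal sketch of bringing such a profile up by hand, assuming the docker driver and crio runtime used elsewhere in this run (the exact flags TestNetworkPlugins would pass are not shown in this excerpt):
  out/minikube-linux-arm64 start -p cilium-082524 --driver=docker --container-runtime=crio --cni=cilium
  out/minikube-linux-arm64 profile list
  out/minikube-linux-arm64 delete -p cilium-082524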